So, that web page mentions the "VM Network" port group, a default port group that is created when you deploy an ESXi host. In my case, the host was auto-deployed with different port groups and that one didn't exist.
Hence, I decided to create a port group called "VM Network" on the host where I am deploying the vCSA and… it worked!
Now, as you can see, that port group is visible and I was able to continue the installation successfully!
It seems that you must have this port group if you are deploying a vCSA, at least from your PC, so bear this in mind if you are deploying a new vCSA and the host doesn't have the default port groups.
As said in previous articles, this series focuses only on VMware vSphere; hence, VMware Horizon View is not covered (linked clones are commonly used in that product).
What is a linked clone?
What can you see here?
In the previous image, you can see 3 different characters, but they share something in common… the actor!
Linked clones are the same! A linked clone is a type of clone (a copy of a virtual machine) where the parent VM shares its virtual disks with its clones.
The linked clone is created from a snapshot of the parent VM and, because it comes from a snapshot, it will have the same state the parent had when the snapshot was taken.
When the linked clone is created, it shares its virtual disk (.vmdk file) with the snapshot of the parent VM, which leads to some unique characteristics:
The clone is dependent on the parent VM because they share virtual disks. If you delete the parent VM's snapshot, it will corrupt the clone's virtual disk.
Even though both VMs share storage, any changes performed in the clone won't affect the parent VM and vice versa.
The linked clone will have exactly the same data as the source VM because it was created from a snapshot.
The space savings are obvious because the clone only writes new modifications to its own virtual disk. So, the clone's virtual disk size will be only the amount of data that changed after it was created!
Use or prepare a VM (the Parent VM) that will be used as a master/parent to deploy the linked clones.
Power off the Parent VM (recommended but not mandatory).
Take a snapshot of the VM.
Create the linked clones referencing the snapshot taken previously.
Power on the clones and customize them (apply customization specifications, for example).
(Extra) Before powering on the linked clone, take another snapshot of the clone to use as a rollback (if the end user needs it).
(Extra) Power on the clone; it is ready to be delivered.
(Extra plus) If you decide to keep the linked clone for any reason, you can perform a full clone of it and it will become an independent VM!
In the next section, I will show you how to create a linked clone of a Windows VM with PowerCLI; in my case, I will use a customization specification within the script to customize the clone.
Here we have the VM that we’re going to use as our Parent VM:
– Name: SQLMasterVM
– IP: 192.168.1.174
– Disk allocation: around 35 GB across both disks
– Domain: vmug.bcn
Inside the Guest OS:
Space allocated in DS:
What are we going to do?
Shutdown the master image VM that hosts some DBs.
Create a snapshot while the VM is powered off to ensure it is consistent (this VM has SQL installed, so it's even more recommended).
Perform the linked clone via PowerCLI.
Start the VM (we aren't going to do the extra step of creating a snapshot of the clone) and use a customization specification to fully customize the clone.
All of this will be performed by this simple script:
##Creating SQL Linked Clone from a Parent VM "SQLMasterVM"
$OSSpec = Get-OSCustomizationSpec -Name 'Win-SQL'
$BaseVM = "SQLMasterVM"
$LinkedVM = "SQL-LC1"
# Delete snapshots on the Parent VM
Get-Snapshot -VM $BaseVM | Remove-Snapshot -Confirm:$false
Start-Sleep -Seconds 2
New-Snapshot -VM $BaseVM -Name "Linked-Snapshot" -Description "Snapshot for linked clones for $LinkedVM"
#Gather information of the created snapshot
$snapshotParent = Get-Snapshot -VM $BaseVM | Select Name
$snapshotParent = $snapshotParent.Name
Start-Sleep -Seconds 5
#Create Linked Clone referencing snapshot and start the VM.
New-VM -Name $LinkedVM -VM $BaseVM -Datastore "VMS" -ResourcePool (Get-Cluster -Name Gaiden-Cluster | Get-ResourcePool) -OSCustomizationSpec $OSSpec -LinkedClone -ReferenceSnapshot $snapshotParent -DiskStorageFormat Thin
Start-VM -VM $LinkedVM
In this script, I am also using the OSCustomizationSpec parameter in the command that creates the linked clone, to change the IP and name of the resulting clone and to rejoin it to the domain. I am also changing the SQL instance name, in my case, because it's a server with MSSQL Server installed.
Once the script finishes, a new linked clone named "SQL-LC1" is created and powered on.
We can see how long it takes to create a linked clone (5 seconds):
And now look at the storage allocated by the linked clone (powered off), approximately 750 MB:
After the Linked clone is created and powered on, you can do whatever you want.
I had to wait some minutes (around 10 in my case) until the OS customization specification finished all the specified actions (power on the VM, join the domain, reboot the VM, execute a script to update the SQL instance, etc.).
Here is the “real” space allocated after the Linked clone has booted up and I logged in with a user, around 4 GB:
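Alongside the datastore view, you can check the same numbers from PowerCLI. This is a minimal sketch (it assumes an active vCenter connection and uses the clone name from the script above):

```powershell
# Compare the provisioned size with the space the linked clone actually uses.
# "SQL-LC1" is the clone created by the script earlier in this post.
Get-VM -Name "SQL-LC1" | Select-Object Name, ProvisionedSpaceGB, UsedSpaceGB
```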
A look inside the Guest OS of the linked clone (new hostname and IP, and the same storage as the Parent VM):
Linked clones are commonly used in VDI and DEV environments; here are some examples:
DB server testing
File server testing
Benefits and limits
Let’s summarize which are the benefits and limits that we can find in linked clones:
Super-fast cloning compared to a full/normal clone: it takes seconds instead of minutes to clone large VMs.
Space savings, because changes are stored in a separate disk (the clone's own disk).
Useful for development environments; and if you want to keep the clone, just perform a full clone of it!
Deploy as many linked clones as you want: they all reference the snapshot on the Parent VM, so there is no disk chain between clones (except for the snapshot you created, of course).
Ongoing changes made to the virtual disk of the source VM don't affect the linked clones, and changes to the disk of the linked clone don't affect the parent.
It can be performed with the parent VM powered on but there will be some performance degradation and probably inconsistent data (if, for example, the parent VM hosts a DB).
It is recommended, but not mandatory, that the parent VM be powered off.
There is a storage/disk dependency: the linked clone is created from the parent VM's snapshot, so if you delete that snapshot, inconsistencies will occur in the clone (and, in the end, you will have to delete it).
Performance of the clone will be impacted (as the virtual machines are sharing storage).
Linked clones have multiple benefits compared to full clones and they have many use cases, as we saw before.
You can easily replicate the state of a VM (via a snapshot) and deploy linked clones to your end users with all the benefits, such as space savings or deployment speed.
To end this series, we will look at instant clones, another type of clone that is even faster than linked clones but, with some particularities.
Continuing with the cloning virtual machines in vSphere series, today I am going to write about the full clone, how it works and some useful information about it.
So, let’s talk about clones… but just full clones.
How does it work?
As you probably know, a full clone is an exact copy of a source VM, meaning that everything from the parent VM is copied (VM settings, disks, files, etc.).
This action can be performed with the parent VM powered off or powered on and, if the parent has snapshots, they will be consolidated in the clone once it is done.
When you clone a VM, be aware that all data will be identical, so if you power on the clone without performing any customization, you will probably have conflicts with IPs, MAC addresses, SIDs (Windows), etc.
The great thing about a full clone is that, after the cloning operation finishes, the clone is an independent copy of the virtual machine that doesn't share anything with the parent (speaking from a compute and storage perspective within vSphere).
Ways to do it
First of all, you will need VMware vCenter to do it.
There are other (unofficial) ways, like copying all data related to the virtual machine (the .vmdk and .vmx files) and then registering the "new" VM with another name.
Let’s continue with the usual ways:
vSphere Web Client
You can do it through the vSphere Web Client; it's as simple as right-clicking a VM -> "Clone to Virtual Machine…":
It takes some time to finish (depending on the storage the source VM has allocated) but, in the end, you will have your new clone.
Likely, you are more familiar with deploying templates…
Deploying a template is the same as cloning but, aside from copying the same data from the parent virtual machine, vSphere lets you customize the deployed VM, so you can create many clones with different configurations as you wish.
Of course, you can do it with PowerCLI. These are the minimal parameters needed to perform it (the DiskStorageFormat parameter is optional but recommended because, by default, it will convert all disks to Thick Provision Eager Zeroed):
In the previous screenshot, you can see the minimum parameters required to perform a full clone; if you want to see more options, you can check it here.
As you can see in the code, it’s similar to deploying a template, isn’t it?
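As a sketch, a minimal full clone in PowerCLI could look like the following. It reuses the parent VM, datastore and cluster names from the linked clone post; the clone name is a hypothetical example:

```powershell
# Minimal full clone: without the -LinkedClone switch, the entire disk is copied,
# so the result is an independent VM. "SQL-FullClone" is a hypothetical name.
New-VM -Name "SQL-FullClone" -VM "SQLMasterVM" `
    -Datastore "VMS" `
    -ResourcePool (Get-Cluster -Name "Gaiden-Cluster" | Get-ResourcePool) `
    -DiskStorageFormat Thin
```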
The main use case is deploying from a template; maybe we are not aware of it, but deploying from a template is just cloning our source VM (the master template) and then customizing it.
I have seen many customers use it as a "rollback" when they have to perform a destructive task within the Guest OS: they just shut down the parent VM and power on the clone.
If you think a snapshot can do the same as a clone, well, not always… some applications don't handle quiesced snapshots well.
This is why, as a solution, you can create a full clone while the virtual machine is powered off and then have a copy that is consistent and without corruption.
Another use case could be performing a full clone to use in other environments. Although there are better ways to do this (with other products), when the Guest OS has many customizations, this can be an alternative to re-creating the entire virtual machine.
Benefits and limitations
The benefits of a full clone were mentioned before:
If the cloning operation is executed when the source VM is powered off, it can be used as a rollback in many cases (there are better options like a VM backup but, it can help a lot).
Creation of an independent VM that shares nothing with the source VM.
Used for templates, so they are very useful!
These are some limitations, rather than disadvantages, that we can find:
It takes some time to create a full clone (it depends on the allocated storage) as it has to copy all storage from the source VM.
It can only be performed with VMware vCenter (there are other ways as I explained before but they are not official).
If done while the VM is powered on, it has an impact on the source VM that can be noticed by the business, so it isn't the best option while the virtual machine is running.
To sum up, a full clone is a great way to have an identical copy of another VM to use it as a permanent virtual machine once you configure it accordingly.
As said before, it is the same as deploying a template, because deploying a template is just cloning a VM and then customizing it.
It usually takes some minutes to finish the clone (depending on the storage allocated to the parent VM), which is why there are other, faster ways to deploy clones (more on the next posts!).
Most of you already know how to clone virtual machines within vSphere (I mean cloning from the vSphere Web Client within vCenter) but, beyond that, there are other types of clones you can use in vCenter, like Linked Clones or Instant Clones (aka Project Fargo/VMfork).
Due to the amount of content that can be discussed about each clone type, I decided to write a short series of posts about cloning VMs!
Types of clone
Here I will summarize each type of clone that exists in vSphere; some of them are used in different products or interfaces but, in the end, all of them are accessible through PowerCLI.
This is the "classic" clone you can perform in the vSphere Web Client: no matter the VM's status (powered on or off), you can make a copy of the VM.
If you want a consistent clone, it's recommended to power off the VM and then perform the clone.
This is an independent copy with no dependency on the parent virtual machine after the clone is complete (meaning you can remove the parent VM if you need to).
The main advantage is that you can have a reliable copy of the Parent VM (remember, this is not a backup) if you want to replace it. As this is a full copy of the VM (the entire disk is copied), it might take several minutes depending on the size of the VM.
After you perform it, remember that everything will be the same: all configuration (SID, network configuration, hostname, etc.) within the VM will be identical, hence it can lead to problems if both VMs co-exist at the same time without the proper configuration.
A linked clone is a clone made from a snapshot of the Parent VM. This means that both VMs (the linked clone and the parent VM) share virtual disks.
So the linked clone is dependent on the parent VM, meaning the linked clone needs access to the parent VM's disks. The clone should be done while the Parent VM is powered off (as a best practice).
Once a linked clone is performed, changes on the parent VM don't affect the linked clone and, the other way around, changes in the linked clone don't affect the original VM. The main benefits of using linked clones are:
Saving disk space, because only the differences between the origin snapshot and the linked clone are allocated.
Quickly deploying tens or hundreds of VMs, as it doesn't need to copy the entire disk.
This is a technology commonly used in VMware Horizon View to provide desktop deployment (rapidly deploying a lot of VMs). The thing is that we can also use it with PowerCLI, without having Horizon View, and for more use cases.
Similar to the linked clone, an Instant Clone is like an improved version of linked clone technology. This is something "new" in vSphere 6.7, as it is available through the API.
Like linked clone technology, there is a parent VM which shares its disk with the clone (the instant clone) but, in this case, it shares the memory too (even if TPS is disabled).
There are two types of Instant Clones that I will explain in more detail in the next posts but, as a summary, you can take an instant clone of a source VM at a point in time and deliver as many VMs (instant clones) as you want.
The Parent VM must be powered on, instead of powered off like with other types of clones; this way, it provides an even faster way to deploy VMs because it doesn't require power-cycling the instant clones.
As benefits, we have the same as with linked clone technology, plus memory efficiency (because memory is shared between VMs) and the ability to resume the VM at a point in time without power-cycling the clone.
On the other hand, depending on which type of instant clone you use, you can end up running with a lot of delta disks.
I have tried to summarize each clone type that we can perform within vSphere; if you want to read more, stay tuned for more detail in the coming posts of this series on cloning VMs.
Today let's talk about vSphere Network I/O Control (NIOC) version 3 (vSphere 6.0). It's a feature of the vSphere Distributed Switch that allows you to granularly control the output/egress bandwidth at the VM network adapter level. Although there are other useful options within NIOC's capabilities, today I will focus only on the network adapter bandwidth limit for VMs.
Enable the feature in the dvSwitch (in our case, the one with the Data Network).
Scenario:
2 VMs within 2 Networks (Portgroups in dvSwitch)
KenshiroVM is a VM with Ubuntu that simulates traffic with iperf as a client.
Win10Pro is a VM with iperf application configured as a server:
We will look at how Network I/O Control (NIOC) lets us granularly limit the bandwidth of a virtual machine (KenshiroVM): we will limit the bandwidth of a single NIC and see if it really works.
Lab time! I enabled NIOC in the dvSwitch I created for OS traffic (Data Network), called "DSwitch_DataNW". The other dvSwitch is "DSwitchMGMT", where NIOC is not enabled (no NIOC = no restrictions).
As I said before we have 2 networks:
Data Network: 10.10.6.0/24
Management Network: 192.168.1.0/24
1. Verify that the client (KenshiroVM) has no restrictions within the network.
2. Then, we will limit the Data Network adapter from KenshiroVM, launch iperf to simulate traffic and review the limitation configured.
3. Finally we will test iperf again but in the Management Network and review that we have no restrictions.
1. Currently, KenshiroVM has no restrictions configured (notice that in the blue rectangle there are no NIOC options, because that port group (Management) is located in the other dvSwitch, where we didn't enable NIOC):
If we launch iperf command with 200Mbps on port 9999 from KenshiroVM:
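For reference, the commands would look something like this (the server IP is a hypothetical address in the Data Network; adjust the flags to your iperf version):

```shell
# On Win10Pro (server side), listen on port 9999
iperf3 -s -p 9999

# On KenshiroVM (client side), push 200 Mbit/s for 30 seconds.
# 10.10.6.20 is a hypothetical IP for Win10Pro's Data Network adapter.
iperf3 -c 10.10.6.20 -p 9999 -b 200M -t 30
```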
We can see the traffic on the destination (Win10Pro) on the Data Network Adapter (you can see the subnet in the screenshot):
Also, we can review it in vSphere Web Client (25 MBps = 200 Mbps):
2. Now we are going to set a limit of 88 Mbit/s on KenshiroVM's Data Network adapter:
Now, we perform the same command with iperf on the client (KenshiroVM):
Even pushing 200 Mbit/s through the Data Network adapter with iperf, NIOC limits the traffic to 88 Mbit/s, as set before. Here is the traffic seen by Win10Pro's Data Network adapter:
In KenshiroVM, iperf performed the transfer at approximately 88 Mbps:
3. Now, we do the same test (150 Mbps) but on the adapter whose dvSwitch doesn't have NIOC enabled:
KenshiroVM confirms that it performed at approximately 150 Mbps:
As a result of using vSphere NIOC, we can granularly set bandwidth limits on a VM network adapter, and the traffic will obey the configured settings. Note that it only works for outbound traffic: if you set a limit on a destination VM's adapter, NIOC will not restrict the inbound traffic.
It's been a while since I posted something; I was busy with university duties and, to be honest, I couldn't spend time on publishing things. But here we are!
First, I am currently studying for the VCAP6-DCV Deployment exam (3V0-623) and I am learning a lot with it, so maybe I will post more topics related to it.
Today I am going to talk about TCP/IP stacks in VMkernel ports (for vSphere 6). This is something I didn't care much about when I studied for the VCP6-DCV but, with the VCAP and all the time spent in the lab, I thought it was a great topic to cover!
So, let's get down to brass tacks.
Just as a reminder, a VMkernel port is a port you create on an ESXi host to communicate with the "outside world" (outside the host); so, when you want two hosts to communicate, each host will have a VMkernel port for that communication.
A TCP/IP stack is a set of networking protocols (do you remember the OSI model?) used to provide networking support for the services it handles. So you can use different stacks to support the services within each stack in different ways.
A quick look at the services you can choose when creating a VMkernel port:
I am not going to explain each one because we are going to focus on vMotion and Provisioning traffic.
Continuing with TCP/IP stacks: when you create a new VMkernel port on an ESXi host, you can choose which services you want to enable:
Regarding the vMotion and Provisioning TCP/IP stacks, you can configure them in two ways:
For vMotion, for example, you can do the following (this is the most common configuration, Default TCP/IP stack with a service Enabled):
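In PowerCLI, that first option (a VMkernel port on the default stack with the vMotion service enabled) might be sketched like this; the host name, vSwitch, port group and IP are all hypothetical examples:

```powershell
# VMkernel port on the default TCP/IP stack with the vMotion service enabled.
# Host, vSwitch, port group and addressing below are hypothetical.
$esx = Get-VMHost -Name "esxi01.vmug.bcn"
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch "vSwitch0" `
    -PortGroup "vMotion-PG" -IP "10.10.7.11" -SubnetMask "255.255.255.0" `
    -VMotionEnabled:$true
```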
Or (Dedicated TCP/IP stack):
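For the dedicated stack, one way is directly on the host with esxcli; this is a sketch with a hypothetical vmknic name, port group and IP, so check the flags against your ESXi version:

```shell
# Create the dedicated vMotion netstack (if not already present)
esxcli network ip netstack add -N "vmotion"
# Add a VMkernel interface on that netstack and give it a static IP
esxcli network ip interface add -i vmk2 -p "vMotion-PG" -N "vmotion"
esxcli network ip interface ipv4 set -i vmk2 -I 10.10.7.12 -N 255.255.255.0 -t static
```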
I must admit I always use the first one, the default TCP/IP stack with the service enabled. So, which should we use, the dedicated stack or the default one?
Dedicated TCP/IP stack options
vMotion: It provides better isolation (more security) and a separate set of buffers and sockets, and it avoids the routing table conflicts you can have when using the same TCP/IP stack for everything.
Provisioning: Used for cold VM migration (migrating powered-off VMs), cloning and snapshot traffic.
So, I discussed this with some people because I wanted to know what benefit the dedicated TCP/IP stacks could give you, and this is what I gathered:
For vMotion: As a short answer, I would say that if you need to do Cross vCenter vMotion you will need it, because the dedicated stack gives you a Layer 3 VMkernel, meaning routing. With a dedicated stack, you can use a gateway and DNS different from the default TCP/IP stack's, meaning you don't have to use the same stack options as the other services.
For Provisioning traffic: If you have massive amounts of data coming from snapshots or cloning, it is better to use this dedicated stack instead of the default one.
In the end, this is my recap and I hope it helps someone who is not familiar with it. Obviously, you can use the dedicated TCP/IP stack whenever you want, but bear in mind that it will disable that service on the rest of the VMkernel ports.
Anyway, if you think that I missed or want to discuss something, let me know in the comments!