Upgrade your Hyper-V Nutanix block to NOS 3.5.4

img_nutanix-v5

Hi Guys,

I wanted to write a short post on the things I encountered when upgrading my Hyper-V Nutanix blocks to the latest NOS 3.5.4. The following information is provided “as is”: I have found it to work in my environment, but I am not an employee of Nutanix or Microsoft.

The general setup I am working with consists of 2 Nutanix clusters, both running NOS 3.5.3.1. The main reason I was pretty happy with the NOS 3.5.4 release is that it now includes VSS support for Hyper-V. The release notes state:

  • Volume Shadow Copy Service (VSS) support for Hyper-V hosts [FEAT-632]
  • Since the Microsoft VSS framework requires a full share backup for every virtual disk contained in the share, Nutanix recommends that customers plan their environment to accommodate no more than 20 virtual disks per SMB share for optimal backup performance. Going beyond the limit might result in unpredictable behavior such as backups failing. Multiple SMB shares are therefore recommended for environments with a large number of virtual disks.
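To get a feel for whether you are under that 20-virtual-disk recommendation, something like the following could count virtual hard disks per SMB share on a host. This is a rough sketch, not an official Nutanix check: it assumes the Hyper-V PowerShell module and that your VM storage lives on UNC paths of the form \\server\share\…

```powershell
# Rough sketch: count virtual hard disks per SMB share
# (assumes the Hyper-V module; the 20-disk limit comes from the release notes)
Import-Module Hyper-V

Get-VM | Get-VMHardDiskDrive |
    Group-Object { ($_.Path -split '\\')[3] } |   # \\server\share\... -> share name
    Select-Object Name, Count |
    Where-Object { $_.Count -gt 20 }              # shares over the recommended limit
```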

I won’t be discussing the real upgrade steps, since these are already covered in the upgrade guide, however I will be explaining the post-installation tasks you have to do in order to join the Nutanix storage to Active Directory, changes to GFLAGS and a quick comparison of the Nutanix diagnostics.py results between NOS 3.5.3.1 and NOS 3.5.4.

Start up an SSH session, issue the “cluster info” command through the ncli, and write down the cluster name.

screen27

The following commands pre-create a computer account in AD. According to Nutanix this is only supported with a Server 2012 domain controller at the Server 2012 domain functional level, but I don’t see why: it works perfectly with a 2008 R2 domain controller, and I’m even running the Server 2003 domain functional level. The key is to have a member server running at least PowerShell 3 with the Active Directory module loaded (it can be found in the RSAT feature tools).

Replace fqdn, clustername and clusterip with the values for your environment:

#adds a DNS entry for your cluster with its IP
dnscmd.exe /recordadd 'fqdn' 'clustername' 'A' 'clusterip'

#prompts for a password for the Nutanix storage computer object
$password = (Get-Credential -UserName 'clustername.fqdn' -Message "Please enter a password for the Nutanix storage computer object").Password

#pre-creates the Nutanix storage computer object
New-ADComputer -Name clustername -SAMAccountName clustername -UserPrincipalName clustername@fqdn -PasswordNeverExpires:$true -CannotChangePassword:$true -AccountPassword $password -DisplayName 'Nutanix storage cluster on clustername' -Description 'Nutanix storage cluster on clustername' -DNSHostName clustername.fqdn
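Before moving on, it can’t hurt to verify the object landed correctly. A quick sketch using the same RSAT Active Directory module (clustername is the same placeholder as above):

```powershell
# Verify the pre-created computer object (clustername is a placeholder)
Get-ADComputer -Identity clustername -Properties DNSHostName, PasswordNeverExpires |
    Format-List Name, DNSHostName, PasswordNeverExpires
```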

Now that the account has been pre-created, we still need to attach it using the Nutanix ncli:

screen28

Additionally, you need to change a GFLAG setting for both clusters. Note that this is a cluster-wide setting and is necessary to improve overall VSS performance; contact Nutanix support to have it changed for you.

I also kept track of the diagnostics output between NOS 3.5.3.1 and NOS 3.5.4. This really shows that the power is truly in the software. I’m already looking forward to the NOS 4.0 release; from what I’ve heard there will be another performance boost on the Hyper-V platform.

The following table lists the different diagnostics outputs before and after the upgrade on a NX-1450:

  NUTANIX CLUSTER                      NOS 3.5.3.1   NOS 3.5.4
  Sequential write bandwidth (MBps)    447           542
  Sequential read bandwidth (MBps)     1757          1764
  Random read (IOPS)                   49496         54343
  Random write (IOPS)                  22367         27696

As you can see, my random write IOPS are more than 20% faster and the random read IOPS almost 10% faster, all thanks to the software upgrade. Do note that you have to interpret these values as the total number of IOPS in the cluster; mine consists of 4 nodes, so on average you could say I can reach up to 13585 random read IOPS and 6924 random write IOPS per node!
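The per-node figures are simply the cluster totals divided by the node count; a quick sanity check of the arithmetic:

```powershell
# Quick sanity check of the per-node figures on a 4-node block
$nodes = 4
$randomRead  = 54343   # cluster total, NOS 3.5.4
$randomWrite = 27696   # cluster total, NOS 3.5.4

"{0:N0} random read IOPS per node"  -f ($randomRead  / $nodes)
"{0:N0} random write IOPS per node" -f ($randomWrite / $nodes)
```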

That’s it. If you run into any problems, do contact Nutanix support; I must admit they have the best support I have ever encountered. Thumbs up and keep up the good work!

I will try to cover the integration, and maybe some best practices, with Veeam v7 in a next blog.

 

UPDATE

I had a nice chat with Nutanix support, and they provided me with the following information, which explains the 20% increase in performance: they have changed the power plan setting in Server 2012 R2 to High Performance.

If the cluster is running 3.5.4, the power profile should already be High Performance. You can check it with:
"powercfg -l"

The same can be done in PowerShell, of course:

$targetServers = 'ntnx-1','ntnx-2','ntnx-3','ntnx-4'
Invoke-Command -ComputerName $targetServers {
    Try {
        # No need to check whether it's currently active first
        powercfg -SETACTIVE SCHEME_MIN
    } Catch {
        Write-Warning -Message "Unable to set power plan to high performance"
    }

    # Print the active power setting
    hostname
    powercfg -GETACTIVESCHEME
}
screen29

Davy

 

Exporting your Nutanix Hyper-V machines with Powershell

img_nutanix-v5

Hi Guys,

In this short blog post you will find a PowerShell script I created to quickly export my Hyper-V virtual machines running on Nutanix. As you may or may not know, we can do this LIVE. Yes, that’s right: there is no longer a need to shut down your virtual machine before you can export it. This makes testing any changes on a production machine pretty easy. There is even a possibility to export a snapshot of your virtual machine live. Anyway, all of these are built-in features of Hyper-V, of course.

First of all, I’m guessing you know what Nutanix is and what it can deliver to you and your company, so I’m jumping right into the setup I created. The main idea was to have a script that I could execute on demand or via a scheduled task. Since I have multiple Nutanix nodes in my cluster, I need to call this script for every node. So in general this means I created two scripts:

  • One to export the running virtual machines to another Nutanix container and copy them over to a NAS
  • One that calls the above script once for every node and sends me a nice e-mail afterwards

I’m no PowerShell guru, but this is something you can easily use (and maybe adapt) in your own environment.

$date = Get-Date -Format yyyy-MM-dd
$backupDir = "\\nutanix\CTR1\BACKUP\$date"
$Logfile = "$backupDir\log.log"

#Function to append data to the log file
Function LogWrite
{
    Param ([string]$logstring)
    Add-Content $Logfile -Value $logstring
}

Import-Module Hyper-V
$vms = Get-VM

foreach ($vm in $vms)
{
    Try {
        $vmname = $vm.Name
        $vmstate = $vm.State
        If ($vm.Name -like "NTNX*")
        {
            Write-Host "This is a controller VM, do not export"
        }
        elseif ($vmstate -eq "Running")
        {
            $timestamp = Get-Date
            LogWrite($timestamp)
            LogWrite("$vmname ($vmstate): exporting the virtual machine...")
            Write-Host "Exporting the virtual machine ""$vmname""... " -NoNewline
            # -ErrorAction Stop so a failed export lands in the Catch block
            Export-VM -VM $vm -Path $backupDir -ErrorAction Stop
            Write-Host "Successfully exported $vmname" -ForegroundColor Green
            LogWrite("Successfully exported $vmname")
            $timestamp = Get-Date
            LogWrite($timestamp)
            LogWrite("")
        }
    }
    Catch
    {
        Write-Host "Failed" -ForegroundColor Red
        LogWrite("Failed to export ""$vmname"" to ""$backupDir""")
        LogWrite("")
    }
}

 

If you run this interactively, you can easily follow the export progress. Note that this will only export the running virtual machines, excluding the Nutanix controller VMs:

screen25

Of course the goal is to have this script run at a certain time, and this for every node in your cluster. So far I have only adapted the scripts to be called for every node. You could adapt this to first enumerate all nodes in the cluster and work with a foreach loop. Try it out! 🙂
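A sketch of that enumeration idea, assuming the FailoverClusters module is available on the machine you run it from and that the export script exists at C:\scripts\export-vm.ps1 on the caller (the path and script name are from my setup; adapt as needed):

```powershell
# Sketch: run the export script once per cluster node instead of
# hard-coding the node names (assumes the FailoverClusters module)
Import-Module FailoverClusters

$nodes = (Get-ClusterNode).Name
foreach ($node in $nodes) {
    # -FilePath reads the local script and runs it on the remote node
    Invoke-Command -ComputerName $node -FilePath 'C:\scripts\export-vm.ps1'
}
```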

The log file looks something like this:

screen26

This brings us to the next script, running it for every node and then reporting afterwards.

#clean up the Nutanix container from yesterday's backup
$limit = (Get-Date).AddDays(-1)
$path = "\\nutanix\CTR1\BACKUP"
# Delete files older than the $limit.
Get-ChildItem -Path $path -Recurse -Force | Where-Object { !$_.PSIsContainer -and $_.CreationTime -lt $limit } | Remove-Item -Force
# Delete any empty directories left behind after deleting the old files.
Get-ChildItem -Path $path -Recurse -Force | Where-Object { $_.PSIsContainer -and (Get-ChildItem -Path $_.FullName -Recurse -Force | Where-Object { !$_.PSIsContainer }) -eq $null } | Remove-Item -Force -Recurse

#start export
Invoke-Command -ComputerName NTNX-XYLOS-1 -FilePath 'C:\scripts\export-vm.ps1'
Invoke-Command -ComputerName NTNX-XYLOS-2 -FilePath 'C:\scripts\export-vm.ps1'
Invoke-Command -ComputerName NTNX-XYLOS-3 -FilePath 'C:\scripts\export-vm.ps1'

#START WITH REPORT

$date = Get-Date -Format yyyy-MM-dd
$Logfile = "\\nutanix\CTR1\BACKUP\$date\log.log"
$smtpServer = "xxxx"
$attlog = New-Object Net.Mail.Attachment($Logfile)
$msg = New-Object Net.Mail.MailMessage
$smtp = New-Object Net.Mail.SmtpClient($smtpServer)
$msg.From = "xxxxxxxxxxx"
$msg.To.Add("xxxxxxxxxxx")
$msg.To.Add("yyyyyyyyyyy")
$msg.Subject = "Nutanix backup script - $date"
$msg.Body = "Attached is the Nutanix export log for $date"
$msg.Attachments.Add($attlog)
$smtp.Send($msg)
$attlog.Dispose()

And there you go!

Leveraging ODX to deploy a VM from a template on Nutanix

Image

The following information is provided “as is”: I have found it to work in my environment, but I am not an employee of Nutanix or Microsoft. If you want to learn more about Nutanix, I can definitely recommend the following blogs:

Check out my previous blog post about Integrating Nutanix Hyper-V with SVMM 2012 R2: https://davyneirynck.wordpress.com/2014/04/17/integrating-a-nutanix-hyper-v-cluster-with-scvmm-2012-r2/

In this article I will explain and demonstrate what needs to be done to make sure you can deploy your virtual machines from a template using ODX. ODX stands for Offloaded Data Transfer; in short, it makes sure that the SAN (or, in our case, Nutanix) moves the blocks itself instead of those files being copied over the network. As you would guess by now, this makes deployment of your virtual machines a lot faster.

Currently ODX on Nutanix is invoked for the following operations:

  • In VM or VM to VM file copy on NDFS SMB share
  • SMB share file copy
  • Deploy template from SCVMM Library (NDFS SMB share)

In the PRISM UI, you will need to have 2 containers. One called “CTR1” in my case, used to store all my running virtual machines, and “nutanix_vmm”, the container that I use to store my ISOs and, of course, templates. I enabled compression on this second container and just watched how I got up to 69% compression, enabling me to save 30.15 GB.

screen14

screen15

Don’t forget to edit the filesystem whitelist settings in the PRISM UI, enabling your VMM server to access the Nutanix storage:

screen21

Add the IP address of your servers in the following format: x.x.x.x/subnetmask (example: 10.32.1.5/255.255.255.0)

Let’s get started with the configuration in SCVMM. We need to add a library share that points to our “nutanix_vmm” container for the placement of our templates; hence, when we deploy a VM based on this template, ODX will kick in and make sure that the blocks are transferred to our other container. In my lab I have found that deploying a VM based on this template using ODX gives me a fully patched, domain-joined, Windows-activated VM in less than 7 minutes. Without ODX this was around 17 minutes.

Go to library, right click your library server and change the properties so your run as account is linked as the management credentials of the library server.

screen24

Now select “Add Library Shares”

screen13

It’s very important not to add your FQDN to the Nutanix container or ODX will not work; I’m pretty sure this is a Microsoft issue.

screen16

After this the new library share can be used.

I created a new Generation 2 virtual machine, which provides the following new functionality:

  • PXE boot by using a standard network adapter
  • Boot from a SCSI virtual hard disk
  • Boot from a SCSI virtual DVD
  • Secure Boot (enabled by default)
  • UEFI firmware support
  • Faster Boot Time and Faster Installation of Guest Operating System
  • IDE drives and legacy network adapter support has been removed

However, the guest must be running one of these operating systems for a gen2 Hyper-V VM:

  • Windows Server 2012
  • Windows Server 2012 R2 Preview
  • 64-bit versions of Windows 8
  • 64-bit versions of Windows 8.1 Preview

When creating a template, I always follow these guidelines:

  • Create your VM and give it a clear name such as 2012r2_STD_gen2
  • Download and install all updates
  • Customize the settings you prefer, such as Windows Firewall, IE ESC, Remote Desktop, etc.
  • Shut down the VM and perform a clone operation in SCVMM so we don’t lose this reference image (name this machine, for instance, 2012r2_STD_gen2_template)
  • Create a template in SCVMM from this cloned VM and save it in your newly created library share (nutanix_vmm)

The reason why you need to clone your machine is that your virtual machine will be destroyed when it is turned into a template. I tend to re-create my template every 3 months: all you need to do is start up your original machine, install your updates and maybe some additional software that you use in your company, clone it again and create the template. Make sure you keep a document where you can track your changes.

When you have done all of this, your template will be shown in the library pane and you can right click on it to create a new virtual machine based on this template.

screen17

I configured my hardware settings as follows; nothing really special going on here:

screen11

The OS settings are a lot more fun; this is where you can really save some time. I’m using the new AVMA keys to activate my guest VMs. Have a look at this TechNet document to find out what AVMA is: http://technet.microsoft.com/en-us/library/dn303421.aspx

Also, I can define here that this VM must join my domain, using the following credentials (the run as account I configured in my previous blog post).

screen19

Again, I’m guessing this is a Microsoft issue: you need to remove your FQDN from the destination path (e.g. \\nutanix-1234\CTR1) or your deployment will fall back to BITS over HTTPS.

screen20

For some reason there is still a bug in SCVMM when trying to create a virtual machine based on a gen2 template: it just comes up saying there is no boot device.

To resolve this, simply issue these PowerShell commands so the boot device of the template is changed to the SCSI adapter:

screen22
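Roughly, the commands in the screenshot boil down to pointing the template’s first boot device at the SCSI virtual disk. A sketch using the VMM PowerShell module (the template name is an example from my naming convention above):

```powershell
# Point the gen2 template's first boot device at the SCSI virtual disk
# (template name is an example; requires the SCVMM PowerShell module)
$template = Get-SCVMTemplate -Name "2012r2_STD_gen2_template"
Set-SCVMTemplate -VMTemplate $template -FirstBootDevice "SCSI,0,0"
```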

And finally, watch how your virtual machine is deployed using fast file copy:

screen23

 

In a next article I will discuss the setup of a PowerShell script I wrote to export my running virtual machines to a Synology NAS.

Integrating a Nutanix Hyper-V cluster with SCVMM 2012 R2

Image

As the title says, I will try to give you the necessary information to successfully manage your Nutanix Hyper-V cluster with System Center Virtual Machine Manager 2012 R2. This information is provided “as is”: I have found it to work in my environment, but I am not an employee of Nutanix or Microsoft. If you want to learn more about Nutanix, I can definitely recommend the following blogs:

The least I can say about this setup is that it was a bit challenging to make it work. Let’s get started with the prerequisites:

  • A server running System Center Virtual Machine Manager 2012 R2
  • Your Nutanix Hyper-V cluster is up and running (I will cover this in a later blog)

Log on to your VMM server and execute the following PowerShell command in an elevated prompt to allow the host to access SMB storage that does not require security signatures:

Set-SmbClientConfiguration -RequireSecuritySignature $false -Force

Fire up the VMM console and start up the Add Hyper-V Hosts and Clusters wizard

Image

The Nutanix nodes are already domain joined and the Hyper-V failover cluster is already created, so I can select “Windows Server computers in a trusted Active Directory domain”.
Image
I manually entered the credentials here (typically a domain admin account). We will cover the run as account a bit later in this post.
Image
Add the cluster name or, if you really want to be precise, all individual nodes in the cluster.
Image

 

Since we are adding a cluster, all nodes are automatically added as well. As you can see here I’m using 2 Nutanix blocks, each consisting of 4 nodes.
Image
Specify your host group
Image
The wizard ends here; you can click Finish and your cluster will be added to the VMM console. Make sure that after you created your Hyper-V failover cluster you rebooted all of your hosts, otherwise VMM will tell you to reboot them before it can complete the Add Cluster wizard.
Image
To be able to use the run as account in VMM 2012 R2, we first need to create a new user that has local admin rights on all Nutanix nodes. I also use this account to join my virtual machines to the domain when I deploy them from a template. In order to make this work, use Delegate Control in Active Directory Users and Computers and grant this account the “Add computers to the domain” permission.

We added the vmm_nutanix account to the local Administrators group of every cluster node. We also need to configure this account in VMM and attach it to the cluster as the run as account.
Image

Execute the following PowerShell script to assign the newly created run as account to your cluster:

$cluster = Get-SCVMHostCluster -Name name_of_your_cluster

$runas = Get-SCRunAsAccount -Name "vmm_nutanix"

Set-SCVMHostCluster -VMHostCluster $cluster -VMHostManagementCredential $runas

We can now add the file share storage to the Nutanix cluster. Note that this Nutanix container will be used to host all virtual machines (in our scenario).

Right click on your cluster and navigate to file share storage

Add the following path as file share storage: \\nutanix-1234\CTR1, where you replace nutanix-1234 with the DNS entry you created during the setup of your cluster.

Image

If you configured everything correctly, the status of the file share should be shown as follows:

Image

 

And tadaa, you should be ready to deploy your VMs from SCVMM. In another blog post I will cover the steps for how Nutanix can leverage ODX to deploy virtual machines based on a template that resides on an NDFS SMB share.

Stay tuned