Archive for the ‘Storage Spaces’ Tag

#Azure Storage Spaces Direct #S2D Standard Storage vs Premium Storage

I see this question often in the forums: should I use Standard storage or Premium storage? Well, it depends. Premium costs more than Standard, but even that depends on the setup: can a $4000 Azure Storage Spaces configuration outperform a $1700 Premium configuration? This blog post is not about how to configure Storage Spaces but is an overview of the concepts. Did I pick the right machine, did I build the right configuration? It all depends.

I love the HPC VM sizes, but they are also expensive.

So for these setups I created an almost basic Storage Spaces Direct configuration. The key here is to pick the right VM for the job.

Standard: 6-node cluster, 4 cores / 8 GB memory per node, 96 disks in total, type S30 (1 TB), 96 TB raw disk space and 32 TB for the vDisk

Premium: 3-node cluster, 2 cores / 16 GB memory per node, 9 disks in total, type P30 (1 TB), 9 TB raw disk space and 3 TB for the vDisk

Standard A8 (RDMA): 5-node cluster, 8 cores / 56 GB memory per node, 80 disks in total, type P20 (500 GB), 40 TB raw disk space
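A quick sanity check on the raw versus vDisk numbers above. Assuming a three-way mirror (an assumption based on the 96 TB / 32 TB and 9 TB / 3 TB ratios, not stated explicitly above), usable capacity is roughly one third of raw:

```powershell
# Hypothetical sanity check: a three-way mirror keeps 3 copies of the data,
# so usable capacity is roughly raw capacity divided by 3.
$rawTB  = 96      # Standard config: 96 disks x 1 TB
$copies = 3       # assumed three-way mirror
$usableTB = $rawTB / $copies
"Usable: {0} TB" -f $usableTB    # 32 TB, matching the vDisk size above
```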

So basically, comparing both configurations makes no sense, because they are completely different: bigger machines versus small VMs with a lot less storage.

Standard Storage vs Premium Storage

The performance of standard disks varies with the VM size to which the disk is attached, not to the size of the disk.


So each node has 16 disks: 16 × 500 IOPS, with a maximum bandwidth of 480 Mbps. That could be an issue: if I want to use the full gigabit network, I need at least 125 MB/s.
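Those per-node numbers can be sketched as simple back-of-the-envelope arithmetic (the 500 IOPS per standard disk figure comes from the note further down in this post):

```powershell
# Back-of-the-envelope math, assuming ~500 IOPS per standard data disk
$disksPerNode = 16
$iopsPerDisk  = 500
$nodeIops = $disksPerNode * $iopsPerDisk      # 8000 IOPS per node
$gbitNetworkMBps = 1000 / 8                   # a full 1 Gb network is 125 MB/s
"{0} IOPS per node, network needs {1} MB/s" -f $nodeIops, $gbitNetworkMBps
```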


With Premium storage everything is great; building the same configuration as the Standard one would cost $3300 vs $12000. If you have a solution and you need the specifications, then this is the way to go.

Can I outperform that configuration with standard disks? In an old blog post I ran a performance test on a 5-node A8 cluster with 16 premium P20 (500 GB) disks per node, 40 TB raw, and got a network throughput of 4.2 Gbps.


Measurements differ between machines, and basically there is no one-size-fits-all: it all depends on the workload, the configuration and your needs.

Using the script by Mikael Nystrom (Microsoft MVP) on the basic disks gives a not very impressive list: high latency, but that is Standard storage for you.


Premium storage is much faster and more consistent. So when you use Azure and need a certain load or number of VMs, there is a lot of choice; if you pick a different machine the results can be better, especially when you are hitting the IOPS ceiling of the VM. Prepare some calculations when building your new solution, and test a few configurations before you go into production.

Azure changes every day; what is the best solution today may be outdated tomorrow.

Below are some useful links on the Machine type and storage type.


Thanks for reading my blog. Did you check my other blog post about Azure File Sync?



Follow Me on Twitter @ClusterMVP

Follow My blog

Linkedin Profile Http://

Google Me :

Bing Me :


Posted November 9, 2017 by Robert Smit [MVP] in Windows Cluster, Windows Server 2016


Deploying Storage Spaces Direct with VMM 2016 or with Powershell #Cloud #hyperconverged #SysCtr #S2D

Windows Server 2016 comes with a lot of new options, and hyper-converged is one of them. In this blog post I'll show you which options you have when using VMM and S2D. The tools are great, but so is PowerShell, and it always depends on what and how you are building things.

Storage Spaces Direct is a bit like building a do-it-yourself SAN: multiple heads, lots of storage, it can lose one head, and it is low cost.

Storage Spaces Direct seamlessly integrates with the Hyper-V and file servers you know today, using the Windows Server 2016 software-defined storage stack: the Cluster Shared Volume File System (CSVFS), Storage Spaces and Failover Clustering.

The hyper-converged deployment scenario has the Hyper-V servers and Storage Spaces Direct components on the same cluster. Virtual machine files are stored on local CSVs. This allows Hyper-V clusters to scale together with the storage they use. Once Storage Spaces Direct is configured (Enable-ClusterS2D) and the CSV volumes are available, configuring and provisioning Hyper-V is the same process, with the same tools, as any other Hyper-V deployment on a failover cluster. But with System Center Virtual Machine Manager 2016 we can now also configure this during deployment.

Hyper-Converged Stack

Above the layers are shown; as you can see, the storage is defined in three parts: physical disks, storage spaces and the CSV volumes.

So basically we can configure the cluster with Storage Spaces Direct by hand (PowerShell), or if you are using VMM you can do this with templates and the GUI. But is this the same, and is it handy? The only change I made in this post is creating a Scale-Out File Server to use the Storage Spaces Direct volumes.

Well, it is nice that you can do this, but configuring it by hand gives you much more flexibility and control. Yes, it may be more complex, but understanding the method is better than following a wizard.

Let's see the options we have in VMM. There are a couple of ways to configure this; it all depends.


Create a Hyper-V cluster and tick the Enable Storage Spaces Direct option.



Or create a Scale-Out File Server and choose what you want: shared storage, or the Enable Storage Spaces Direct option.

But you can also create the cluster in VMM and configure Storage Spaces Direct later. The fact is that VMM 2016 can create and maintain the storage layer, all from a single interface.

So for this demo I use four servers, Sofs02, Sofs04, Sofs06 and Sofs08; each server has 8 local disks.



These four servers will be transformed into a Storage Spaces Direct cluster.

First, let me check all the disks on the servers.

Get-PhysicalDisk | ? CanPool -EQ 1 | FT FriendlyName, BusType, MediaType, Size


Storage Spaces Direct uses BusType and MediaType to automatically configure caching, the storage pool and storage tiering. In Hyper-V virtual machines the media type is reported as unspecified, so if you are using tools that expect certain disk types you need to fix this; otherwise the cluster creation will fail during cluster validation.

Found a disk with unsupported media type on node ‘Sofs02.mvp.local’. Supported media types are SSD and HDD.
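One way to work around this in a virtualized lab, once the disks are in a storage pool, is to set the media type manually with Set-PhysicalDisk. A sketch, where the pool name "S2D Pool" is an assumption (alternatively you can skip the checks with -SkipEligibilityChecks, as shown in the PowerShell steps below):

```powershell
# Sketch: report every disk in the pool as HDD so that validation and
# tiering tools see a supported media type. Pool name is hypothetical.
Get-StoragePool -FriendlyName "S2D Pool" |
    Get-PhysicalDisk |
    Set-PhysicalDisk -MediaType HDD
```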


Step one is creating a Hyper-v cluster.



As my servers are in the Storage VMM host group, I pick that group, give the cluster a name, and check the Storage Spaces Direct check box.

Typically, when creating this by hand, you would do the following in PowerShell:

#Install the cluster features on all four nodes (-ComputerName takes one server at a time)
"sofs02","sofs04","sofs06","sofs08" | ForEach-Object { Install-WindowsFeature "Failover-Clustering","RSAT-Clustering" -IncludeAllSubFeature -ComputerName $_ }

Test-Cluster -Node "sofs02","sofs04","sofs06","sofs08"

New-Cluster -Name Democlu201 -Node "sofs02","sofs04","sofs06","sofs08" -NoStorage -StaticAddress ""

#SkipEligibilityChecks because the nodes are running on VHDX disks
Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks

The big difference here is that you can't customize the cluster during this step, so no quorum or any other settings.
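You can still change those settings afterwards with PowerShell. For example, a sketch of configuring a cloud witness quorum on the freshly created cluster (the storage account name and key are placeholders):

```powershell
# Sketch: configure a cloud witness after VMM has created the cluster.
# Account name and access key below are placeholders, not real values.
Set-ClusterQuorum -Cluster Democlu201 -CloudWitness `
    -AccountName "<storageaccount>" -AccessKey "<accesskey>"
```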


Selecting all the nodes


Give the cluster a fixed IP, or pick a random one from the IP pool.


All the tasks are running, and in a few minutes we have a cluster that holds Storage Spaces Direct, unless it fails the cluster validation test.

If you are using S2D you must run the cluster validation test, and remember: only SSD and HDD media type disks are supported. So if the media type is unspecified or unknown, the validation report will fail and so will this job.


In our case the job was successful and the cluster with Storage Spaces Direct is ready for use.


Now that the cluster is ready you can use the Storage after creating the pool.

And if you have already built a hyper-converged cluster (Hyper-V servers and Storage Spaces Direct components), you can use it in VMM as well.


Now that the Cluster is added we can create a Pool.

If you build Storage Spaces Direct with PowerShell, you end up with something like this:

#Create storage pool 
New-StoragePool -StorageSubSystemName Pool01.mvp.local -FriendlyName Pool01 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem -Name Pool01.mvp.local | Get-PhysicalDisk)

#list Storage pool

Get-StoragePool Pool01

#Removal of the storage pool
Remove-StoragePool -FriendlyName Pool01

But when using the VMM GUI you will not get the friendly name you would get when doing this in PowerShell.



But this is easily changeable.

To check whether Storage Spaces Direct is enabled on the cluster, you can run a PowerShell command.
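The original screenshot of that command is not in this archive; a minimal sketch, assuming the Democlu201 cluster from this post and the S2DEnabled cluster property:

```powershell
# Sketch: the S2DEnabled cluster common property shows whether
# Storage Spaces Direct is enabled (1) or not (0).
(Get-Cluster -Name Democlu201).S2DEnabled
```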


Or check your cluster under Storage > Enclosures; every server is listed as its own enclosure.


Now that the enclosures are listed, we create the pools and the disks.


We select the clustered pool and choose Manage to create the virtual disk.



We create a new pool, and if you have not created a classification yet, you will need to do that too.

Give it a name and pick the disks you want; I select all the disks and use them for one big pool.


Now that we have selected all the disks and created the pool, we can create a virtual disk on the pool.


Creating the disk can be a little confusing in the VMM GUI as you need to press Cancel and OK.


Give the disk a name


Pick the right size: as my pool is 168 GB and I can only do a mirror, you understand I can't create a 160 GB disk; I have 4 nodes.


How can this guide help you? You can use this guide and the Software-Defined Storage Design Calculator spreadsheet to design a storage solution that uses the Storage Spaces and Scale-Out File Server functionality of Windows Server 2012 R2 along with cost-effective servers and shared serial-attached SCSI (SAS) storage enclosures.

#Create virtual disks
New-Volume -StoragePoolFriendlyName Pool01 -FriendlyName CSV02 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_REFS -Size 48GB

As you can see I created a Scale out file server and used the Storage Spaces Direct as storage.


#create Cluster
New-StorageFileServer -StorageSubSystemName DemoClu201.mvp.local -FriendlyName Demosofs201 -HostName Demosofs201.mvp.local -Protocols SMB

#Create file shares and Folders 
md C:\ClusterStorage\Volume1\shares\VM01

New-SmbShare -Name VM01 -Path C:\ClusterStorage\Volume1\shares\VM01 -FullAccess "mvp\Domain Admins"


Now that the file share and SOFS are in place, we can add the share to the Hyper-V server or cluster for use.


When creating a VM we can use Storage Spaces Direct for placement, but as you can see in this post, there are several methods and each one involves different trade-offs. Which one is right is up to you; it depends. See the table below with the pros and cons.

Storage Spaces deployment tools




Failover Cluster Manager & Server Manager

  • Easy to use

  • Slow automatic refreshes in Server Manager when working with storage

  • Some tasks require Windows PowerShell

  • No automation can make provisioning more than a couple virtual disks and file shares tedious

System Center Virtual Machine Manager

  • Easy to use

  • Partial automation of cluster deployment

  • Automated management of file share permissions

  • Can be used to deploy and manage VMs

  • Some tasks require Windows PowerShell (including storage tiers)

  • Requires System Center licenses

  • Might require additional infrastructure if you don’t already have System Center or are deploying at a scale that’s greater than your existing deployment can handle

Microsoft Deployment Toolkit

  • Lots of control over operating system installation options

  • Can be used to deploy other PCs and servers

  • Can be complex

  • Some approaches require System Center Configuration Manager licenses

Windows PowerShell

  • Complete control over all aspects of storage

  • Can automate by writing scripts

  • Requires knowledge of Windows PowerShell

  • Scripts require development and testing

After writing this post, my take is: I would use PowerShell to build the cluster and Storage Spaces Direct and then add them to VMM, but for deploying the basics VMM can be very handy. It all depends on your infrastructure.

The VMM option is really great, but for me it takes too long, and often the job fails because I made a typo or the naming is not the way I want it. As for using Storage Spaces: the hyper-converged option versus the converged option has its challenges, and it all depends on the hardware you have. But for my test lab, or in Azure, S2D runs great.



Posted August 22, 2016 by Robert Smit [MVP] in Windows Server 2016


System Center 2016 VMM Place template VM in Custom OU #sysctr #Cloud #Deploy #VM

When using VMM and deploying templates, you do not always want to place them in the default Computers OU.


Instead, you want the Server 2016 template placed in the OU TP5 and the Hyper-V server placed directly in the OU Hyper-V.

By default there is no GUI item in the VMM console to do this, say on the domain join tab: place this VM in the Hyper-V OU.


Instead, you need to set the value in PowerShell and make a custom OU field.


You can Add Custom Properties as you like.

But first we create a custom guest OS profile; this profile is the basis for the newly built template and the custom OU placement.


Now that the custom OS profile is in place, we can check whether there is a domain OU field.



This shows us the field that we must fill in to get the right OU placement.

Get-SCGuestOSProfile |select Name


Get-SCGuestOSProfile -name "Guest OS 2016TP5"

Setting this in the OS profile

Get-SCGuestOSProfile -name "Guest OS 2016TP5" |Set-SCGuestOSProfile -DomainJoinOrganizationalUnit "OU=SCVMM16,DC=MVP,DC=local"


Now when I create a new template with this OS profile, the VM is placed in the SCVMM16 OU, but it is not visible anywhere in the GUI.

And what if I have already built templates, how do I place them in a custom OU?

Yes, you can do this. First I list all the templates to pick the right one.

Get-SCVMTemplate |select name



$template = Get-SCVMTemplate | where {$_.Name -eq "ws2016G2"}
$template |select name


As I made the OU a variable :

$ou = "OU=SCVMM16,DC=MVP,DC=local"

Set-SCVMTemplate -VMTemplate $template -DomainJoinOrganizationalUnit $ou



So now the template has a custom OU as well.

But still there is no GUI property to show this; therefore, go to the template and create a custom property.


go to the Manage custom Properties


Select Virtual Machine Template properties, give it a name ("Custom OU") and assign it to the template.


Now that this is assigned, we can enable it in the GUI.


But before we get any value in this field, we need to match it with the PowerShell value DomainJoinOrganizationalUnit.


Get-SCVMTemplate | %{ Set-SCCustomPropertyValue -InputObject $_ -CustomProperty $(Get-SCCustomProperty -Name "Custom OU") -Value $_.DomainJoinOrganizationalUnit }



As you can see there is an error; this is because one template has no value.
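A way to avoid that error is to filter out templates without a DomainJoinOrganizationalUnit value before setting the custom property; a sketch, using the same cmdlets as above:

```powershell
# Sketch: only update templates that actually have an OU value set,
# so Set-SCCustomPropertyValue never receives an empty -Value.
Get-SCVMTemplate |
    Where-Object { $_.DomainJoinOrganizationalUnit } |
    ForEach-Object {
        Set-SCCustomPropertyValue -InputObject $_ `
            -CustomProperty (Get-SCCustomProperty -Name "Custom OU") `
            -Value $_.DomainJoinOrganizationalUnit
    }
```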



Now with new deployments the VMs will be placed in the custom OU.






Using Windows Storage Spaces direct with hyper converged in Microsoft Azure with Windows Server 2016

Sometimes you need some fast machines and a lot of IOPS. SSD is the way to go, but what if your site is in Azure?

Well, build a high-performance storage space in Azure. Remember, this setup will cost you some money, or burn your MSDN credits in just one run.

My setup uses several storage accounts and a 5-node cluster with a cloud witness; each node has 16 disks.


As the setup is based on Storage Spaces Direct, I build a 5-node cluster. Some options are not needed, but I need them for my demo, in case you wondered why I installed this or that.

So building the Cluster

Get-WindowsFeature Failover-Clustering
Install-WindowsFeature "Failover-Clustering","RSAT-Clustering","File-Services" -IncludeAllSubFeature -ComputerName "rsmowanode01.AZUTFS.local"

I add the other nodes later.

#Create cluster validation report
Test-Cluster -Node "rsmowanode01.AZUTFS.local"
New-Cluster -Name Owadays01 -Node "rsmowanode01.AZUTFS.local" -NoStorage -StaticAddress ""

Now that my cluster is ready, I added some disks to the VMs and placed them in several storage accounts. (You can expand the default limit; just make an Azure helpdesk request.)

I currently have more storage accounts than needed, but you never know.

As I prep all my Azure VMs in PowerShell, here is an example of how to add the disks to an Azure VM. As I need 16 disks for each of the 5 nodes, that is 80 disks of 500 GB: 40 TB of raw disk.





The PowerShell sample command to create the disks:

Get-AzureVM -Name $vmname -ServiceName $vmname |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 500 -DiskLabel 'datadisk0' -LUN 0 -HostCaching None |
    Update-AzureVM










Now that the cluster is ready and the disks are mounted to the Azure VMs, it is time for some magic.

With: Get-Disk | Where FriendlyName -eq 'Msft Virtual Disk' | Initialize-Disk -PartitionStyle GPT -PassThru

all disks are online. I do not need to format them, as the disks are getting pooled.


Every node gets its own storage enclosure.

To enable the Storage Spaces Direct option, you will need to enable it on the cluster.
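The screenshot of the enable command did not survive in this archive; a minimal sketch, using the Enable-ClusterS2D alias that a later post in this archive mentions (Enable-ClusterStorageSpacesDirect is the full cmdlet name):

```powershell
# Sketch: enable Storage Spaces Direct on the cluster. SkipEligibilityChecks
# is assumed here because Azure data disks report an unspecified media type.
Enable-ClusterS2D -SkipEligibilityChecks -Confirm:$false
```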


What you just did turns the local disks into usable cluster disks.



To create a basic storage pool:

New-StoragePool  -StorageSubSystemName Owadays01.AZUTFS.local -FriendlyName OwadaysSP01 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem  -friendlyname "Clustered Windows Storage on Owadays01" | Get-PhysicalDisk)


#Initialize, partition and format in one pipeline (pipe a disk object in, for example the RAW disks from Get-Disk)
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "IODisk" -AllocationUnitSize 65536 -Confirm:$false



#Query the number of disk devices available for the storage pool
(Get-StorageSubSystem  -Name Owadays01.AZUTFS.local | Get-PhysicalDisk).Count



Mirror storage spaces

Mirroring refers to creating two or more copies of data and storing them in separate places, so that if one copy gets lost the other is still available. Mirror spaces use this concept to become resilient to one or two disk failures, depending on the configuration.

Take, for example, a two-column two-way mirror space. Mirror spaces add a layer of data copies below the stripe, which means that a two-column, two-way mirror space duplicates each individual column's data onto two disks.

Assume 512 KB of data are written to the storage space. For the first stripe of data in this example (A1), Storage Spaces writes 256 KB of data to the first column, which is written in duplicate to the first two disks. For the second stripe of data (A2), Storage Spaces writes 256 KB of data to the second column, which is written in duplicate to the next two disks. The column-to-disk correlation of a two-way mirror is 1:2; for a three-way mirror, the correlation is 1:3.

Reads on mirror spaces are very fast, since the mirror not only benefits from the stripe, but also from having 2 copies of data. The requested data can be read from either set of disks. If disks 1 and 3 are busy servicing another request, the needed data can be read from disks 2 and 4.

Mirrors, while being fast on reads and resilient to a single disk failure (in a two-way mirror), have to complete two write operations for every bit of data that is written. One write occurs for the original data and a second to the other side of the mirror (disk 2 and 4 in the above example). In other words, a two-way mirror requires 2 TB of physical storage for 1 TB of usable capacity, since two data copies are stored. In a three-way mirror, two copies of the original data are kept, thus making the storage space resilient to two disk failures, but only yielding one third of the total physical capacity as useable storage capacity. If a disk fails, the storage space remains online but with reduced or eliminated resiliency. If a new physical disk is added or a hot-spare is present, the mirror regenerates its resiliency.
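The capacity arithmetic from the paragraph above can be sketched in one line each:

```powershell
# Capacity math from the text above: a two-way mirror stores 2 copies of the
# data, a three-way mirror stores 3, so usable = physical / copies.
$physicalTB = 2
$twoWayUsableTB   = $physicalTB / 2   # 1 TB usable from 2 TB physical
$threeWayUsableTB = $physicalTB / 3   # only one third of physical capacity
```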

Note: Your storage account is limited to a total request rate of up to 20,000 IOPs. You can add up to 100 storage accounts to your Azure subscription. A storage account design that is very application- or workload-centric is highly recommended. In other words, as a best practice, you probably don’t want to mix a large number of data disks for storage-intensive applications within the same storage account. Note that the performance profile for a single data disk is 500 IOPs. Consider this when designing your overall storage layout.
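That note implies a practical ceiling on how many heavily used data disks you should place in one storage account:

```powershell
# Sketch: with ~20,000 IOPS per storage account and ~500 IOPS per data disk,
# roughly 40 busy disks are enough to saturate a single account.
$accountIops = 20000
$diskIops    = 500
$maxBusyDisksPerAccount = $accountIops / $diskIops   # 40
```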


Now that the storage pools are in place, we can do some measurements on the speed of disk creation and on IOPS, based on ReFS and NTFS.

These disks I am using for the Scale-Out File Server:

New-Volume -StoragePoolFriendlyName OWASP1 -FriendlyName OWADiskREFS14 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_REFS -Size 2000GB

New-Volume -StoragePoolFriendlyName OWASP1 -FriendlyName OWADiskNTFS15 -PhysicalDiskRedundancy 1 -FileSystem NTFS -Size 20GB



After some disk creation and changes, you could say that ReFS with Cluster Shared Volumes is about 100x as fast!


Now that we have Cluster Storage I’m using this for the SOFS.

#create the SOFS 
New-StorageFileServer -StorageSubSystemName Tech-SOFS.AZUTFS.local -FriendlyName Tech-SOFS -HostName Tech-SOFS -Protocols SMB



Adding the disk and the next test is ready.


First we create a couple of disks on the ReFS share.



So creating a 1 TB disk is not much slower than a 100 GB file; remember, these are fixed files.

When I do this on the NTFS volume and create a 100 GB fixed disk, it takes forever; after 10 minutes I stopped the command. This is why you always do a quick format on an NTFS disk.


Creating a 1 GB disk is a better test; as you can see, this is around 8 times slower even with a 1000x smaller disk.



Let's test IOPS. For this I use the DISKSPD tool: Diskspd Utility: A Robust Storage Testing Tool (superseding SQLIO).


So the disk creation is much faster, and when using this in a Hyper-V deployment, VM creation and file copies are much faster as well.



I only did READ tests! If you also want the write test, use -w1; the -b parameter is the block size.

Testing on REFS

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b64K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat

C:\run\diskspd.exe -c10G -d10 -r -w0 -t8 -o8 -b1024K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat


When using a little 10-second burst we get high rates, but this is not the goal.

C:\run\diskspd.exe -c10G -d10 -r -w0 -t8 -o8 -b1024K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat


Testing On NTFS

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b64K -h -L \\tech-sofs\Tech-NTFS01\testfil1e.dat





So basically you get much more IOPS than on a normal single disk, but it all depends on the block size, the configuration and the storage type, standard or premium.

The main thing is: if you want fast IOPS and fast machines, it can be done in Azure. It will cost you, but it is also expensive on-premises.

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b4K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat

and with several runs you can get some nice results


But the config I used is around $30 in total per hour.

A8 and A9 virtual machines feature Intel® Xeon® E5 processors. Adds a 32 Gbit/s InfiniBand network with remote direct memory access (RDMA) technology. Ideal for Message Passing Interface (MPI) applications, high-performance clusters, modeling and simulations, video encoding, and other compute or network intensive scenarios.

A8-A11 sizes are faster than D-series



Robert Smit

Cloud and Datacenter MVP ( Expertise:  High Available )

Posted January 5, 2016 by Robert Smit [MVP] in Windows Server 2016


What’s new in Windows Server 2016 Failover Cluster overview Get-ClusterDiagnostics Enable-ClusterStorageSpacesDirect #winserv #windowsserver2016

A while ago I created a blog post about all the new properties in Windows Server 2016 clustering.

Well, now that we are close to the RTM version, a lot of things have changed and the naming is different, so it is time for a refresh with a new twist.

When I created this blog


New options for Storage Spaces Direct are in place.

There is now a PowerShell command for this, so there is no need for DasMode=1 anymore:

Disable-ClusterStorageSpacesDirect  Or  Enable-ClusterStorageSpacesDirect  


And there are a lot of new options in the cluster; in the next post I'll dig them up and show them.

But what if we check the PowerShell commands?

Get-Command -Module failoverclusters

PS C:\Windows\system32> Get-Command -Module failoverclusters

CommandType     Name                                               Version    Source                                                            
-----------     ----                                               -------    ------
Alias           Add-VMToCluster                              FailoverClusters                                                  
Alias           Disable-ClusterS2D                           FailoverClusters                                                  
Alias           Enable-ClusterS2D                            FailoverClusters                                                  
Alias           Remove-VMFromCluster                         FailoverClusters                                                  
Function        Get-ClusterDiagnostics                       FailoverClusters                                                  
Cmdlet          Add-ClusterCheckpoint                        FailoverClusters                                                  
Cmdlet          Add-ClusterDisk                              FailoverClusters                                                  
Cmdlet          Add-ClusterFileServerRole                    FailoverClusters                                                  
Cmdlet          Add-ClusterGenericApplicationRole            FailoverClusters                                                  
Cmdlet          Add-ClusterGenericScriptRole                 FailoverClusters                                                  
Cmdlet          Add-ClusterGenericServiceRole                FailoverClusters                                                  
Cmdlet          Add-ClusterGroup                             FailoverClusters                                                  
Cmdlet          Add-ClusteriSCSITargetServerRole             FailoverClusters                                                  
Cmdlet          Add-ClusterNode                              FailoverClusters                                                  
Cmdlet          Add-ClusterPrintServerRole                   FailoverClusters                                                  
Cmdlet          Add-ClusterResource                          FailoverClusters                                                  
Cmdlet          Add-ClusterResourceDependency                FailoverClusters                                                  
Cmdlet          Add-ClusterResourceType                      FailoverClusters                                                  
Cmdlet          Add-ClusterScaleOutFileServerRole            FailoverClusters                                                  
Cmdlet          Add-ClusterServerRole                        FailoverClusters                                                  
Cmdlet          Add-ClusterSharedVolume                      FailoverClusters                                                  
Cmdlet          Add-ClusterVirtualMachineRole                FailoverClusters                                                  
Cmdlet          Add-ClusterVMMonitoredItem                   FailoverClusters                                                  
Cmdlet          Block-ClusterAccess                          FailoverClusters                                                  
Cmdlet          Clear-ClusterDiskReservation                 FailoverClusters                                                  
Cmdlet          Clear-ClusterNode                            FailoverClusters                                                  
Cmdlet          Disable-ClusterStorageSpacesDirect           FailoverClusters                                                  
Cmdlet          Enable-ClusterStorageSpacesDirect            FailoverClusters                                                  
Cmdlet          Get-Cluster                                  FailoverClusters                                                  
Cmdlet          Get-ClusterAccess                            FailoverClusters                                                  
Cmdlet          Get-ClusterAvailableDisk                     FailoverClusters                                                  
Cmdlet          Get-ClusterCheckpoint                        FailoverClusters                                                  
Cmdlet          Get-ClusterGroup                             FailoverClusters                                                  
Cmdlet          Get-ClusterLog                               FailoverClusters                                                  
Cmdlet          Get-ClusterNetwork                           FailoverClusters                                                  
Cmdlet          Get-ClusterNetworkInterface                  FailoverClusters                                                  
Cmdlet          Get-ClusterNode                              FailoverClusters                                                  
Cmdlet          Get-ClusterOwnerNode                         FailoverClusters                                                  
Cmdlet          Get-ClusterParameter                         FailoverClusters                                                  
Cmdlet          Get-ClusterQuorum                            FailoverClusters                                                  
Cmdlet          Get-ClusterResource                          FailoverClusters                                                  
Cmdlet          Get-ClusterResourceDependency                FailoverClusters                                                  
Cmdlet          Get-ClusterResourceDependencyReport          FailoverClusters                                                  
Cmdlet          Get-ClusterResourceType                      FailoverClusters                                                  
Cmdlet          Get-ClusterSharedVolume                      FailoverClusters                                                  
Cmdlet          Get-ClusterSharedVolumeState                 FailoverClusters                                                  
Cmdlet          Get-ClusterVMMonitoredItem                   FailoverClusters                                                  
Cmdlet          Grant-ClusterAccess                          FailoverClusters                                                  
Cmdlet          Move-ClusterGroup                            FailoverClusters                                                  
Cmdlet          Move-ClusterResource                         FailoverClusters                                                  
Cmdlet          Move-ClusterSharedVolume                     FailoverClusters                                                  
Cmdlet          Move-ClusterVirtualMachineRole               FailoverClusters                                                  
Cmdlet          New-Cluster                                  FailoverClusters                                                  
Cmdlet          New-ClusterNameAccount                       FailoverClusters                                                  
Cmdlet          Remove-Cluster                               FailoverClusters                                                  
Cmdlet          Remove-ClusterAccess                         FailoverClusters                                                  
Cmdlet          Remove-ClusterCheckpoint                     FailoverClusters                                                  
Cmdlet          Remove-ClusterGroup                          FailoverClusters                                                  
Cmdlet          Remove-ClusterNode                           FailoverClusters                                                  
Cmdlet          Remove-ClusterResource                       FailoverClusters                                                  
Cmdlet          Remove-ClusterResourceDependency             FailoverClusters                                                  
Cmdlet          Remove-ClusterResourceType                   FailoverClusters                                                  
Cmdlet          Remove-ClusterSharedVolume                   FailoverClusters                                                  
Cmdlet          Remove-ClusterVMMonitoredItem                FailoverClusters                                                  
Cmdlet          Reset-ClusterVMMonitoredState                FailoverClusters                                                  
Cmdlet          Resume-ClusterNode                           FailoverClusters                                                  
Cmdlet          Resume-ClusterResource                       FailoverClusters                                                  
Cmdlet          Set-ClusterLog                               FailoverClusters                                                  
Cmdlet          Set-ClusterOwnerNode                         FailoverClusters                                                  
Cmdlet          Set-ClusterParameter                         FailoverClusters                                                  
Cmdlet          Set-ClusterQuorum                            FailoverClusters                                                  
Cmdlet          Set-ClusterResourceDependency                FailoverClusters                                                  
Cmdlet          Start-Cluster                                FailoverClusters                                                  
Cmdlet          Start-ClusterGroup                           FailoverClusters                                                  
Cmdlet          Start-ClusterNode                            FailoverClusters                                                  
Cmdlet          Start-ClusterResource                        FailoverClusters                                                  
Cmdlet          Stop-Cluster                                 FailoverClusters                                                  
Cmdlet          Stop-ClusterGroup                            FailoverClusters                                                  
Cmdlet          Stop-ClusterNode                             FailoverClusters                                                  
Cmdlet          Stop-ClusterResource                         FailoverClusters                                                  
Cmdlet          Suspend-ClusterNode                          FailoverClusters                                                  
Cmdlet          Suspend-ClusterResource                      FailoverClusters                                                  
Cmdlet          Test-Cluster                                 FailoverClusters                                                  
Cmdlet          Test-ClusterResourceFailure                  FailoverClusters                                                  
Cmdlet          Update-ClusterFunctionalLevel                FailoverClusters                                                  
Cmdlet          Update-ClusterIPResource                     FailoverClusters                                                  
Cmdlet          Update-ClusterNetworkNameResource            FailoverClusters                                                  
Cmdlet          Update-ClusterVirtualMachineConfiguration    FailoverClusters                                                 


This is a long list, but the Get-* commands in it give you instant results without changing anything on the cluster.
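A few of those Get-* cmdlets are what I reach for first when checking a cluster; a minimal sketch (the cluster name is a placeholder):

```powershell
# Read-only health check of a failover cluster; CLUSTER01 is an example name
Import-Module FailoverClusters

Get-Cluster -Name CLUSTER01 | Format-List Name, Domain, *Quorum*
Get-ClusterNode -Cluster CLUSTER01          # node state: Up / Down / Paused
Get-ClusterResource -Cluster CLUSTER01      # resource state per group
Get-ClusterSharedVolume -Cluster CLUSTER01  # CSV state and owner node
```

None of these write anything, so they are safe to run on a production cluster.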

And check this out: Get-ClusterDiagnostics –Verbose

It is like the Cluster Diagnostics and Verification Tool (ClusDiag.exe), but now it is all built into a single PowerShell command.


Get-ClusterDiagnostics runs a health test and zips the results into one file. Really nice for troubleshooting, and for the archive: one set next to the Cluster Validation set.
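Running it and picking up the ZIP for your archive looks roughly like this (run on a cluster node; the report path below is an assumption, matching where validation reports land):

```powershell
# Collect the diagnostics ZIP with verbose progress
Get-ClusterDiagnostics -Verbose

# Grab the newest ZIP from the reports folder for your archive
Get-ChildItem C:\Windows\Cluster\Reports\*.zip |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1
```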


The ZIP file contains all the event logs and the cluster configuration, plus a list of all configuration items with their values. In this case the cluster has only one node, so only one node is displayed.


A quick list of the cluster configuration, with all the settings you can also see with PowerShell: Get-Cluster | fl *


But is this the same as the cluster validation report? No, it is not, although it may contain some of the same info. For troubleshooting, both can be very handy.
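For comparison, the validation report itself is still generated with Test-Cluster; a quick sketch:

```powershell
# Full validation; writes an HTML report to C:\Windows\Cluster\Reports
Test-Cluster -Cluster CLUSTER01

# Or limit it to one category when you only need, say, networking checked
Test-Cluster -Cluster CLUSTER01 -Include "Network"
```

Keeping both the validation report and the diagnostics ZIP side by side gives you the before and after picture of a cluster.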

Things can get very complex with all the new stuff: Storage Spaces Direct, Storage Replica, Cloud Witness, etc. Especially when you create a non-typical cluster configuration, which is on my list: to build the oddest cluster you have ever seen ;)


Happy clustering

Robert Smit


Technorati Tags: Windows Azure,Azure File service,Windows,Server,Clustermvp,Blob,cloud witness

Posted September 28, 2015 by Robert Smit [MVP] in Windows Server 2016


How to Configure the File Share Witness or #Cloud Witness, Windows Server #ws2003 #ws2008 #ws2012 #ws2016 #winserv   2 comments


The file share witness feature is an improvement to the current Majority Node Set (MNS) quorum model. This feature lets you use a file share that is external to the cluster as an additional "vote" to determine the status of the cluster in a two-node MNS quorum cluster deployment.
Consider a two-node MNS quorum cluster. Because an MNS quorum cluster can only run when the majority of the cluster nodes are available, a two-node MNS quorum cluster is unable to sustain the failure of any cluster node. This is because the majority of a two-node cluster is two. To sustain the failure of any one node in an MNS quorum cluster, you must have at least three devices that can be considered as available. The file share witness feature enables you to use an external file share as a witness. This witness acts as the third available device in a two-node MNS quorum cluster. Therefore, with this feature enabled, a two-node MNS quorum cluster can sustain the failure of a single cluster node.
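Configuring the file share witness itself is a one-liner in PowerShell (the share path is a placeholder):

```powershell
# Two-node cluster + external file share as the third vote
Set-ClusterQuorum -NodeAndFileShareMajority "\\FILESERVER\ClusterWitness"
```

The cluster name object needs write access to that share, so it must live on a server outside the cluster.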

This is not new: you can configure a file share witness even on Windows Server 2003. But did you know you can use Azure as a cloud witness, yes, even for 2003? It will not work out of the box; special handling is needed. And this keeps me thinking: what code does Windows Server 2016 have built in that makes this work? That is the fun part.

Well, let’s take a look at the servers:

But if you are still using Windows Server 2003 you have way too much time on your hands: Windows Server 2003 support is ending July 14, 2015.

But for this demo it will work ;)

I have a couple of clusters, like in a museum: 2003, 2008, and so on, all the way up to 2016.

Windows Server 2003


Checking the cluster quorum; currently it is local.


Windows Server 2008

Earlier I created a blog post about creating a file share in Azure.

As Windows Server 2003 and 2008 are not in my scope anymore I will not go into depth on how to configure this, but you should look into the WebDAV options.



But in Windows Server 2016 it is easy: there is already an option in the Cluster Manager to do this in the Azure cloud.





This looks easy, but you will need to create a storage account in Azure first and copy and paste the access key.
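The same step in PowerShell, with the storage account name and access key from the Azure portal (both values are placeholders):

```powershell
# Cloud witness: the cluster stores a tiny blob in the Azure storage account
Set-ClusterQuorum -CloudWitness `
    -AccountName "mystorageaccount" `
    -AccessKey   "<paste-the-access-key-here>"
```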

Vote on my idea to create all this directly in the Failover Cluster Manager.


More info about this :

You can also use the Azure file share locally and/or on other clusters (and versions).

We need to make sure PowerShell and the new Azure File Share CmdLets are installed.  If you need to install PowerShell, you can install it from here.  Once PowerShell is installed, you need to install the CmdLets for Azure File Share here

The download is a ZIP file that you should save and unpack to a local directory. Do not store the content in C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\ServiceManagement\Azure (i.e. the default directory of the Azure PowerShell installation), as this will result in versioning issues. In our example, let’s say you extract the files to c:\AzureFiles.
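Once the module is unpacked, creating the share looks roughly like this (the module file name, storage account name, and key are assumptions based on the example folder above):

```powershell
# Import the preview Azure Files module from the extraction folder
Import-Module C:\AzureFiles\AzureStorageFile.psd1

# Build a context from the storage account name + key, then create the SMB share
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" `
                               -StorageAccountKey "<storage-account-key>"
New-AzureStorageShare -Name "myshare" -Context $ctx
```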

Using the Azure File share





The File share can be used for several Scenarios

  • “Lift and Shift” applications

Azure Files makes it easier to “lift and shift” applications to the cloud that use on-premise file shares to share data between parts of the application. To make this happen, each VM connects to the file share (see “Getting Started” below) and then it can read and write files just like it would against an on-premise file share.

  • Shared Application Settings

A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances. These settings can also be managed via the REST interface, which allows worldwide access to the configuration files.

  • Diagnostic Share

An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.

  • Dev/Test/Debug

When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.
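In all four scenarios, getting started from a VM means persisting the storage credentials and mapping the share over SMB (account name, key, and share name are the placeholders from above):

```powershell
# Persist the storage account credentials so the mapping survives reboots
cmdkey /add:mystorageaccount.file.core.windows.net `
       /user:mystorageaccount /pass:<storage-account-key>

# Map the share over SMB
net use Z: \\mystorageaccount.file.core.windows.net\myshare
```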

Again, this is just a preview. Just be sure to understand the limitations of Azure Files; the most important are:

  • 5TB per share
  • Max file size 1TB
  • Up to 1000 IOPS (of size 8KB) per share
  • Up to 60MB/s per share of data transfer for large IOs
  • SMB 2.1 support only

Here are the links to how to create an Azure file share and build your desktop share.

Build the Windows Server Cluster Azure Quorum Cloud Witness in just a few steps.

And yes, you can build several configurations with the Azure file share; cloud storage is there, so use it. There is only one thing with the cloud: you will need an internet connection from your servers, unless you already use ExpressRoute.


Happy clustering

Robert Smit


Posted September 28, 2015 by Robert Smit [MVP] in Windows Server 2016


What’s new in Windows Server 2016 Clustering and Storage overview #winserv   Leave a comment

What’s new in Windows Server 2016? Well, there are a lot of new features in Windows Server 2016. In the next few blogs I’ll select an item and show how to use that new feature.

On my blog there are already several items on Windows Server 2016: how to do Storage Spaces Direct, Storage Replica, Containers, or the new cluster PowerShell items. But there are always new items, so first I’m going to redo all the new cluster PowerShell items.

What is change in Windows Server 2016 (10) cluster – Setting Cluster Common Properties #winserv

Below is a short list of all the new items in Windows Server 2016. Maybe not every item is directly usable in your environment, or it may just be a nice-to-have thing, so take a look at the new items.


  • Windows Server Containers: Windows Server 2016 Technical Preview now includes containers, which are an isolated, resource-controlled, and portable operating environment. They are an isolated place where an application can run without affecting the rest of the system or the system affecting the application. For some additional information on containers

  • What’s new in Active Directory Domain Services (AD DS) in Windows Server Technical Preview. Active Directory Domain Services includes improvements to help organizations secure Active Directory environments and provide better identity management experiences for both corporate and personal devices.

  • What’s New in Active Directory Federation Services. Active Directory Federation Services (AD FS) in Windows Server 2016 Technical Preview includes new features that enable you to configure AD FS to authenticate users stored in Lightweight Directory Access Protocol (LDAP) directories.

  • What’s New in Failover Clustering in Windows Server Technical Preview. This topic explains the new and changed functionality of Failover Clustering. A Hyper-V or Scale-out File Server failover cluster can now easily be upgraded without any downtime or need to build a new cluster with nodes that are running Windows Server 2016 Technical Preview.

  • What’s new in Hyper-V in Technical Preview. This topic explains the new and changed functionality of the Hyper-V role in Windows Server 2016 Technical Preview, Client Hyper-V running on Windows 10, and Microsoft Hyper-V Server Technical Preview.

  • Windows Server Antimalware Overview for Windows Server Technical Preview. Windows Server Antimalware is installed and enabled by default in Windows Server 2016 Technical Preview, but the user interface for Windows Server Antimalware is not installed. However, Windows Server Antimalware will update antimalware definitions and protect the computer without the user interface. If you need the user interface for Windows Server Antimalware, you can install it after the operating system installation by using the Add Roles and Features Wizard.

  • What’s New in Remote Desktop Services in Windows Server 2016. For the Windows Server 2016 Technical Preview, the Remote Desktop Services team focused on improvements based on customer requests. We added support for OpenGL and OpenCL applications, and added MultiPoint Services as a new role in Windows Server.

  • What’s New in File and Storage Services in Windows Server Technical Preview. This topic explains the new and changed functionality of Storage Services. An update in storage quality of service now enables you to create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks on Hyper-V virtual machines. Storage Replica is a new feature that enables synchronous replication between servers for disaster recovery, as well as stretching of a failover cluster for high availability.

  • What’s New in Web Application Proxy in Windows Server Technical Preview. The latest version of Web Application Proxy focuses on new features that enable publishing and preauthentication for more applications and improved user experience. Check out the full list of new features that includes preauthentication for rich client apps such as Exchange ActiveSync and wildcard domains for easier publishing of SharePoint apps.


Cluster Operating System Rolling Upgrade

A new feature in Failover Clustering, Cluster Operating System Rolling Upgrade, enables an administrator to upgrade the operating system of the cluster nodes from Windows Server 2012 R2 to Windows Server 2016 Technical Preview without stopping the Hyper-V or the Scale-Out File Server workloads. Using this feature, the downtime penalties against Service Level Agreements (SLA) can be avoided.
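The rolling upgrade ends with a single cmdlet once every node is on the new version; a sketch:

```powershell
# Per node: Suspend-ClusterNode, reinstall with Windows Server 2016, Add-ClusterNode.
# Until ALL nodes run the new version, the cluster stays at the 2012 R2 level
# and the upgrade is reversible. Then commit the functional level:
Update-ClusterFunctionalLevel

# Verify the result
(Get-Cluster).ClusterFunctionalLevel
```

Note that Update-ClusterFunctionalLevel is a one-way door: after committing there is no going back to 2012 R2 nodes.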

Storage Replica

Storage Replica (SR) is a new feature that enables storage-agnostic, block-level, synchronous replication between servers or clusters for disaster recovery, as well as stretching of a failover cluster between sites. Synchronous replication enables mirroring of data in physical sites with crash-consistent volumes to ensure zero data loss at the file-system level. Asynchronous replication allows site extension beyond metropolitan ranges with the possibility of data loss.
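The SR cmdlets follow the usual verb-noun pattern; a minimal server-to-server sketch (server names, volumes, and replication group names are placeholders):

```powershell
# Test the topology first; writes a requirements report to C:\Temp
Test-SRTopology -SourceComputerName SRV01 -SourceVolumeName D: `
                -SourceLogVolumeName L: `
                -DestinationComputerName SRV02 -DestinationVolumeName D: `
                -DestinationLogVolumeName L: `
                -DurationInMinutes 5 -ResultPath C:\Temp

# Then create the synchronous partnership: SRV01 D: replicates to SRV02 D:
New-SRPartnership -SourceComputerName SRV01 -SourceRGName rg01 `
                  -SourceVolumeName D: -SourceLogVolumeName L: `
                  -DestinationComputerName SRV02 -DestinationRGName rg02 `
                  -DestinationVolumeName D: -DestinationLogVolumeName L:
```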

Cloud Witness

Cloud Witness is a new type of Failover Cluster quorum witness in Windows Server 2016 Technical Preview that leverages Microsoft Azure as the arbitration point. The Cloud Witness, like any other quorum witness, gets a vote and can participate in the quorum calculations. You can configure cloud witness as a quorum witness using the Configure a Cluster Quorum Wizard.


Virtual Machine Resiliency

Compute Resiliency: Windows Server 2016 Technical Preview includes increased virtual machine compute resiliency to help reduce intra-cluster communication issues in your compute cluster.


Diagnostic Improvements in Failover Clustering

To help diagnose issues with failover clusters, Windows Server 2016 Technical Preview includes the following:

  • Several enhancements to cluster log files (such as time zone information and the DiagnosticVerbose log) that make it easier to troubleshoot failover clustering issues.

  • A new dump type, Active memory dump, which filters out most memory pages allocated to virtual machines, and therefore makes the memory.dmp much smaller and easier to save or copy.


Site-aware Failover Clusters

Windows Server 2016 Technical Preview includes site-aware failover clusters that enable grouping of nodes in stretched clusters based on their physical location (site). Cluster site-awareness enhances key operations during the cluster lifecycle such as failover behavior, placement policies, heartbeats between the nodes, and quorum behavior.
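In the Technical Preview the site is simply a property on each node, plus an optional preferred site on the cluster; a sketch assuming two sites numbered 1 and 2 (node names are placeholders):

```powershell
# Assign nodes to sites (Technical Preview syntax)
(Get-ClusterNode Node1).Site = 1
(Get-ClusterNode Node2).Site = 1
(Get-ClusterNode Node3).Site = 2
(Get-ClusterNode Node4).Site = 2

# Prefer site 1 for placement and quorum tie-breaking
(Get-Cluster).PreferredSite = 1
```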

Workgroup and Multi-domain clusters

In Windows Server 2012 R2 and previous versions, a cluster can only be created between member nodes joined to the same domain. Windows Server 2016 Technical Preview breaks down these barriers and introduces the ability to create a Failover Cluster without Active Directory dependencies. You can now create failover clusters in the following configurations:

  • Single-domain Clusters. Clusters with all nodes joined to the same domain.

  • Multi-domain Clusters. Clusters with nodes which are members of different domains.

  • Workgroup Clusters. Clusters with nodes which are member servers / workgroup (not domain joined).
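Creating such a cluster means using a DNS administrative access point instead of an Active Directory one; a sketch with placeholder names (each non-domain-joined node first needs the same local administrator account):

```powershell
# Create a workgroup cluster with a DNS-only administrative access point
New-Cluster -Name WGCLUSTER -Node SRV01, SRV02 `
            -AdministrativeAccessPoint DNS
```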

System Center Hashtags like #cloud

System Center: #sysctr
System Center App Controller: #appctrl
System Center Virtual Machine Manager: #vmm
System Center Service Manager: #scsm
System Center Operations Manager: #scom
System Center Data Protection Manager: #dpm
System Center Orchestrator: #sco
System Center Advisor: #scadvisor
System Center Configuration Manager: #configmgr
System Center Azure: #azure
System Center Windows Azure Pack: #wap

System Center All Up:

System Center – Configuration Manager Support Team blog:
System Center – Data Protection Manager Team blog:
System Center – Orchestrator Support Team blog:
System Center – Operations Manager Team blog:
System Center – Service Manager Team blog:

System Center – Virtual Machine Manager Team blog:

Windows Intune:

WSUS Support Team blog:

The AD RMS blog:

App-V Team blog:

MED-V Team blog:
Server App-V Team blog:

The Forefront Endpoint Protection blog :
The Forefront Identity Manager blog :
The Forefront TMG blog:
The Forefront UAG blog:


Happy clustering

Robert Smit


Posted September 21, 2015 by Robert Smit [MVP] in Windows Server 2016

