#Azure Storage Spaces Direct #S2D Standard Storage vs Premium Storage

I see this question often in the forums: should I use Standard Storage or Premium Storage? Well, it depends. Premium costs more than Standard, but even that depends on the configuration. Can a $4,000 Azure Storage Spaces configuration outperform a $1,700 Premium configuration? This blog post is not about how to configure Storage Spaces, but more an overview of the concepts: did I pick the right machine, did I build the right configuration? Well, it all depends.

I love the HPC VM sizes https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-hpc but they are also expensive.

So for these setups I created a Storage Spaces Direct configuration that is almost all basic. The key here is to pick the right VM for the job.

Standard: 6-node cluster, 4 cores, 8 GB memory; 96 disks total, type S30 (1 TB); 96 TB raw disk space and 32 TB for the vDisk

Premium: 3-node cluster, 2 cores, 16 GB memory; 9 disks total, type P30 (1 TB); 9 TB raw disk space and 3 TB for the vDisk

Standard A8 (RDMA): 5-node cluster, 8 cores, 56 GB memory; 80 disks total, type P20 (500 GB); 40 TB raw disk space

So basically, comparing these configs makes no sense because they are so different: bigger machines vs. small VMs, and a lot less storage.

Standard Storage vs Premium Storage

The performance of standard disks varies with the VM size to which the disk is attached, not to the size of the disk.


So the nodes have 16 disks each: 16 × 500 IOPS, with a max bandwidth of 480 Mbps. That could be an issue: if I want to use a full gigabit network, I need at least 125 MB/s.
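The arithmetic behind that concern, as a quick sketch (using the 500 IOPS per standard disk and the 480 Mbps cap quoted above; both figures come from this post, not new measurements):

```powershell
# Per-node figures for the standard setup above.
$nodeIops    = 16 * 500      # 16 standard disks x 500 IOPS each = 8000 IOPS
$vmCapMBps   = 480 / 8       # a 480 Mbps VM bandwidth cap is only 60 MB/s
$gigabitMBps = 1000 / 8      # a full 1 Gb/s network link needs 125 MB/s
"$nodeIops IOPS; cap $vmCapMBps MB/s vs $gigabitMBps MB/s for full gigabit"
```

So the VM's disk bandwidth cap, not the network, is the first bottleneck here.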


With Premium it is all great: building the same config as the Standard setup, the cost would be $3300 vs $12000. If you have a solution and you need the specifications, then this is the way to go.

Can I outperform that configuration with standard disks? In an old blog post I did a performance test on a 5-node A8 cluster with 16 Premium Storage P20 (500 GB) disks per node, 40 TB raw, and got a network throughput of 4.2 Gbps.



Measurements differ between machines, and basically there is no one-size-fits-all; it all depends on the workload, config, and needs.

Using the script from Mikael Nystrom (Microsoft MVP) on the basic disks gives a not very impressive list: high latency, but that's Standard Storage.


Premium Storage is way faster and more consistent. So when using Azure and you need a certain amount of load or VMs, there is a lot of choice; if you pick a different machine the results can be better, especially when hitting the IOPS ceiling of the VM. Prepare some calculations when building your new solution, and test some configurations before you go into production.

Azure is changing every day; today this may be the best solution, but it could be outdated tomorrow.

Below are some useful links on the machine types and storage types.





Thanks for reading my blog. Did you check my other blog post about Azure File Sync : https://robertsmit.wordpress.com/2017/09/28/step-by-step-azure-file-sync-on-premises-file-servers-to-azure-files-storage-sync-service-afs-cloud-msignite/



Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Deploying Storage Spaces Direct with VMM 2016 or with Powershell #Cloud #hyperconverged #SysCtr #S2D

Windows Server 2016 comes with a lot of new options, and hyper-converged is one of them. In this blog post I'll show you what options you have when using VMM and S2D. The tools are great, but so is PowerShell, and it always depends on what and how you are building things.

Storage Spaces Direct is a bit like building a do-it-yourself SAN: multiple heads, lots of storage, you can lose one head, and low costs.

Storage Spaces Direct seamlessly integrates with the Hyper-V and file servers you know today, building on the Windows Server 2016 software-defined storage stack, including the Cluster Shared Volume File System (CSVFS), Storage Spaces and Failover Clustering.

The hyper-converged deployment scenario has the Hyper-V servers and Storage Spaces Direct components on the same cluster. Virtual machine files are stored on local CSVs. This allows for scaling Hyper-V clusters together with the storage they use. Once Storage Spaces Direct is configured (Enable-ClusterS2D) and the CSV volumes are available, configuring and provisioning Hyper-V is the same process and uses the same tools you would use with any other Hyper-V deployment on a failover cluster. But now, with System Center Virtual Machine Manager 2016, we can also configure this during deployment.

Hyper-Converged Stack

Above the layers are shown; as you can see, the storage is defined in 3 parts: physical disks, spaces and the CSV volumes.

So basically we can configure the cluster with Storage Spaces Direct by hand (PowerShell), or if you are using VMM you can do this using templates and the GUI. But is this the same, and is it handy? The only change I made in this post is creating a Scale-Out File Server to use the Storage Spaces Direct volumes.

Well, it is nice that you can do this, but configuring it by hand gives you much more flexibility and configuration options. Yes, it is maybe more complex, but understanding the method is better than following a wizard.

Let's see the options we have in VMM; there are a couple of ways to configure this, and it all depends.


Create a Hyper-V cluster and tick the enable Storage Spaces Direct option.



Or create a Scale-Out File Server and choose what you want: shared storage or the enable Storage Spaces Direct option.

But you can also create the cluster in VMM and configure Storage Spaces Direct later. The fact is that VMM 2016 can create and maintain the storage layer, all from a single interface.

So for this demo I use 4 servers (Sofs02, Sofs04, Sofs06, Sofs08); each server has 8 local disks.



These 4 servers will be transformed into a Storage Spaces Direct cluster.

First, let me check all the disks on the servers:

Get-PhysicalDisk | ? CanPool -EQ 1 | FT FriendlyName, BusType, MediaType, Size


Storage Spaces Direct uses BusType and MediaType to automatically configure caching, the storage pool and storage tiering. In Hyper-V virtual machines the media type is reported as unspecified, so if you are using tools that expect certain disk types you need to fix this; otherwise the cluster validation will fail and so will the cluster creation.

Found a disk with unsupported media type on node ‘Sofs02.mvp.local’. Supported media types are SSD and HDD.
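One hedged way to work around this in a lab (assuming all these virtual disks should be treated as HDD; adjust the media type to whatever your tooling and validation expect):

```powershell
# Stamp a media type on every disk that is still eligible for pooling.
Get-PhysicalDisk | Where-Object CanPool -EQ $true |
    Set-PhysicalDisk -MediaType HDD
```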


Step one is creating a Hyper-V cluster.



As my servers are in the Storage VMM host group, I pick this group, give the cluster a name, and check the Storage Spaces Direct check box.

So typically, when creating this by hand, you would do this in PowerShell:

Install-WindowsFeature "Failover-Clustering","RSAT-Clustering" -IncludeAllSubFeature -ComputerName "sofs02","sofs04","sofs06","sofs08"

Test-Cluster -Node "sofs02","sofs04","sofs06","sofs08"

New-Cluster -Name Democlu201 -Node "sofs02","sofs04","sofs06","sofs08" -NoStorage -StaticAddress ""

Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks   # -SkipEligibilityChecks as you are running VHDX disks

The big difference here is that you can't customize the cluster during this step, so no quorum or any other settings.


Selecting all the nodes


Giving the cluster a fixed IP, or picking a random one from the IP pool


All the tasks run, and in a few minutes we have a cluster that holds Storage Spaces Direct, unless it fails the cluster validation test.

If you are using S2D you must run the cluster validation test, and remember: only SSD and HDD media type disks are supported. So if the media type is unspecified or unknown, the validation report will fail and so will this job.


In our case the job succeeded and the cluster with Storage Spaces Direct is ready for use.


Now that the cluster is ready, you can use the storage after creating the pool.

And if you have already built a hyper-converged cluster (Hyper-V servers and Storage Spaces Direct components on the same cluster), then you can use it in VMM as well.


Now that the Cluster is added we can create a Pool.

In case you build Storage Spaces Direct with PowerShell, you end up with something like this:

#Create storage pool 
New-StoragePool -StorageSubSystemName Pool01.mvp.local -FriendlyName Pool01 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem -Name Pool01.mvp.local | Get-PhysicalDisk)

#list Storage pool

Get-StoragePool Pool01

#Removal of the storage pool
Remove-StoragePool -FriendlyName Pool01

But when using the VMM GUI tool you will not get the friendly name you would get when doing this in PowerShell.



But this is easily changeable.

To check whether Storage Spaces Direct is enabled on the cluster, you can run a PowerShell command.
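As a sketch, one such check is the cluster's S2DEnabled common property (the cluster name Democlu201 is taken from the example above):

```powershell
# Returns 1 when Storage Spaces Direct is enabled on the cluster.
(Get-Cluster -Name Democlu201).S2DEnabled
```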


Or check your cluster under Storage and Enclosures; every server is listed as its own enclosure.


Now that the enclosures are listed, we create the pools and the disks.


We select the clustered pool and choose Manage to create the virtual disk.



We create a new pool, and if you have not created a classification yet, you will need to do this too.

Give it a name and pick the disks that you want; I select all the disks and use them for one big pool.


Now that we have selected all the disks and created the pool, we can create a virtual disk on the pool.


Creating the disk can be a little confusing in the VMM GUI as you need to press Cancel and OK.


Give the disk a name


Pick the right size: as my pool is 168 GB and I can only do a mirror, you understand that I can't create a 160 GB disk with 4 nodes.


How can this guide help you? You can use this guide and the Software-Defined Storage Design Calculator spreadsheet to design a storage solution that uses the Storage Spaces and Scale-Out File Server functionality of Windows Server 2012 R2 along with cost-effective servers and shared serial-attached SCSI (SAS) storage enclosures.

#Create virtual disks
New-Volume -StoragePoolFriendlyName Pool01 -FriendlyName CSV02 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_REFS -Size 48GB

As you can see I created a Scale out file server and used the Storage Spaces Direct as storage.


#Create the Scale-Out File Server role
New-StorageFileServer -StorageSubSystemName DemoClu201.mvp.local -FriendlyName Demosofs201 -HostName Demosofs201.mvp.local -Protocols SMB

#Create file shares and folders
md C:\ClusterStorage\Volume1\shares\VM01

New-SmbShare -Name VM01 -Path C:\ClusterStorage\Volume1\shares\VM01 -FullAccess "mvp\Domain Admins"


Now that the file share and SOFS are in place, we can add the share to the Hyper-V server or cluster for use.


When creating a VM we can use Storage Spaces Direct to place the VM, but as you can see in this post there are several ways to do things, and each option involves a different choice. The right one? Well, that is up to you, and it depends. See the table below with the pros and cons.

Storage Spaces deployment tools




Failover Cluster Manager & Server Manager

  • Easy to use

  • Slow automatic refreshes in Server Manager when working with storage

  • Some tasks require Windows PowerShell

  • No automation can make provisioning more than a couple virtual disks and file shares tedious

System Center Virtual Machine Manager

  • Easy to use

  • Partial automation of cluster deployment

  • Automated management of file share permissions

  • Can be used to deploy and manage VMs

  • Some tasks require Windows PowerShell (including storage tiers)

  • Requires System Center licenses

  • Might require additional infrastructure if you don’t already have System Center or are deploying at a scale that’s greater than your existing deployment can handle

Microsoft Deployment Toolkit

  • Lots of control over operating system installation options

  • Can be used to deploy other PCs and servers

  • Can be complex

  • Some approaches require System Center Configuration Manager licenses

Windows PowerShell

  • Complete control over all aspects of storage

  • Can automate by writing scripts

  • Requires knowledge of Windows PowerShell

  • Scripts require development and testing

After writing this post I can say: if I had to do this, I would use PowerShell to build the cluster and Storage Spaces Direct and add them to VMM, but for deploying the basics VMM can be very handy. It all depends on your infrastructure.

The VMM option is really great, but for me it takes too long to do things, and often the job fails because I made a typo or the naming is not the way I want it. And for the usage of Storage Spaces: the hyper-converged vs. the converged option has its challenges, and it all depends on the hardware you have. But for my test lab, or in Azure, S2D runs great.


System Center 2016 VMM Place template VM in Custom OU #sysctr #Cloud #Deploy #VM

When using VMM and deploying templates, you don't always want to place them in the default Computers OU.


But instead you want the template Server 2016 placed in the OU TP5, and the Hyper-V server directly placed in the OU Hyper-V.

By default there is no GUI item in the VMM console to do this, say, on the domain join tab: place this VM in the Hyper-V OU.


Instead, you need to fill in the value in PowerShell and make a custom OU field.


You can add custom properties as you like.

But first we create a custom guest OS profile; this profile is the basis for the newly built template and the custom OU placement.


Now that the custom OS profile is in place, we can check if there is a domain OU field.



This shows us the field that we must fill in to get the right OU placement.

Get-SCGuestOSProfile |select Name


Get-SCGuestOSProfile -name "Guest OS 2016TP5"

Setting this in the OS profile

Get-SCGuestOSProfile -name "Guest OS 2016TP5" |Set-SCGuestOSProfile -DomainJoinOrganizationalUnit "OU=SCVMM16,DC=MVP,DC=local"


Now when I create a new template with this OS profile, the VM is placed in the SCVMM16 OU, but this is not visible anywhere in the GUI.

And what if I have already built templates, how do I place them in a custom OU?

Yes, you can do this. First I list all the templates to pick the right one:

Get-SCVMTemplate |select name



$template = Get-SCVMTemplate | where {$_.Name -eq "ws2016G2"}
$template |select name


As I made the OU a variable :

$ou = "OU=SCVMM16,DC=MVP,DC=local"

Set-SCVMTemplate -VMTemplate $template -DomainJoinOrganizationalUnit $ou



So now the template has a custom OU as well.

But still there is no GUI property to show this; therefore, go to the template and create a custom property.


Go to Manage Custom Properties.


Select Virtual Machine Template Properties, give it a name ("Custom OU") and assign it to the template.


Now that this is assigned, we can enable it in the GUI.


But before we get any value in this field, we need to match it with the PowerShell value DomainJoinOrganizationalUnit.


Get-SCVMTemplate | %{ Set-SCCustomPropertyValue -InputObject $_ -CustomProperty $(Get-SCCustomProperty -Name "Custom OU") -Value $_.DomainJoinOrganizationalUnit }



As you can see there is an error; this is because one template has no value.
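A way to avoid that error, sketched with the same cmdlets used above, is to skip templates where DomainJoinOrganizationalUnit is empty:

```powershell
# Only stamp the custom property on templates that actually have an OU set.
Get-SCVMTemplate | Where-Object { $_.DomainJoinOrganizationalUnit } | ForEach-Object {
    Set-SCCustomPropertyValue -InputObject $_ `
        -CustomProperty (Get-SCCustomProperty -Name "Custom OU") `
        -Value $_.DomainJoinOrganizationalUnit
}
```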



Now, with new deployments, the VMs will be placed in the custom OU.





Using Windows Storage Spaces direct with hyper converged in Microsoft Azure with Windows Server 2016

Sometimes you need some fast machines and a lot of IOPS. SSD is the way to go, but what if your site is in Azure?

Well, build a high-performance storage space in Azure. Remember, this setup will cost you some money, or burn your MSDN credits in just one run.

My setup uses several storage accounts and a 5-node cluster with a cloud witness; each node has 16 disks.


As the setup is based on Storage Spaces Direct, I build a 5-node cluster. Some options are not needed, but for my demo I need them, in case you wondered why I installed this or that.

So, building the cluster:

Get-WindowsFeature Failover-Clustering
Install-WindowsFeature "Failover-Clustering","RSAT-Clustering","File-Services" -IncludeAllSubFeature -ComputerName "rsmowanode01.AZUTFS.local"

I add the other nodes later.

#Create cluster validation report
Test-Cluster -Node "rsmowanode01.AZUTFS.local"
New-Cluster -Name Owadays01 -Node "rsmowanode01.AZUTFS.local" -NoStorage -StaticAddress ""

Now that my cluster is ready, I added some disks to the VMs and placed them in several storage accounts. (You can expand the default limit; just make an Azure support request.)

I currently have more storage accounts than needed, but you never know.

As I prep all my Azure VMs in PowerShell, here is an example of how to add the disks to an Azure VM. As I need 16 disks for each of 5 nodes, that is 80 disks of 500 GB: 40 TB raw.





The PowerShell sample command to create the disks:

Get-AzureVM -Name $vmname -ServiceName $vmname |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 500 -DiskLabel 'datadisk0' -LUN 0 -HostCaching None |
    Update-AzureVM
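Since each node needs 16 data disks, a loop saves some typing. This is a sketch built on the classic (ASM) cmdlets shown above; the disk labels and LUN numbering are my own choice:

```powershell
# Attach 16 x 500 GB empty data disks to one VM, one LUN per disk,
# then commit everything with a single Update-AzureVM call.
$vm = Get-AzureVM -Name $vmname -ServiceName $vmname
foreach ($lun in 0..15) {
    $vm = $vm | Add-AzureDataDisk -CreateNew -DiskSizeInGB 500 `
                    -DiskLabel "datadisk$lun" -LUN $lun -HostCaching None
}
$vm | Update-AzureVM
```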










Now that the cluster is ready and the disks are mounted to the Azure VMs, it is time for some magic.

With: Get-Disk | Where-Object FriendlyName -eq 'Msft Virtual Disk' | Initialize-Disk -PartitionStyle GPT -PassThru

all disks are online. I do not need to format them, as the disks are getting pooled.


As every node gets its own storage enclosure.

To enable the Storage Spaces Direct option, you need to enable it on the cluster.
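In PowerShell this is a single cmdlet; the flags below mirror the ones used earlier in this post for an all-virtual-disk lab (cache disabled, eligibility checks skipped):

```powershell
# Enable Storage Spaces Direct on the running cluster.
Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks
```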


What you just did is turn the local disks into usable cluster disks.



To create a basic storage pool:

New-StoragePool  -StorageSubSystemName Owadays01.AZUTFS.local -FriendlyName OwadaysSP01 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem  -friendlyname "Clustered Windows Storage on Owadays01" | Get-PhysicalDisk)


#Continuation of a disk pipeline (the Get-Disk selection that feeds it is not shown):
| Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "IODisk" -AllocationUnitSize 65536 -Confirm:$false



#Query the number of disk devices available for the storage pool
(Get-StorageSubSystem  -Name Owadays01.AZUTFS.local | Get-PhysicalDisk).Count



Mirror storage spaces

Mirroring refers to creating two or more copies of data and storing them in separate places, so that if one copy gets lost the other is still available. Mirror spaces use this concept to become resilient to one or two disk failures, depending on the configuration.

Take, for example, a two-column two-way mirror space. Mirror spaces add a layer of data copies below the stripe, which means that a two-column, two-way mirror space duplicates each individual column’s data onto two disks.

Assume 512 KB of data are written to the storage space. For the first stripe of data in this example (A1), Storage Spaces writes 256 KB of data to the first column, which is written in duplicate to the first two disks. For the second stripe of data (A2), Storage Spaces writes 256 KB of data to the second column, which is written in duplicate to the next two disks. The column-to-disk correlation of a two-way mirror is 1:2; for a three-way mirror, the correlation is 1:3.

Reads on mirror spaces are very fast, since the mirror not only benefits from the stripe, but also from having 2 copies of data. The requested data can be read from either set of disks. If disks 1 and 3 are busy servicing another request, the needed data can be read from disks 2 and 4.

Mirrors, while being fast on reads and resilient to a single disk failure (in a two-way mirror), have to complete two write operations for every bit of data that is written. One write occurs for the original data and a second to the other side of the mirror (disk 2 and 4 in the above example). In other words, a two-way mirror requires 2 TB of physical storage for 1 TB of usable capacity, since two data copies are stored. In a three-way mirror, two copies of the original data are kept, thus making the storage space resilient to two disk failures, but only yielding one third of the total physical capacity as useable storage capacity. If a disk fails, the storage space remains online but with reduced or eliminated resiliency. If a new physical disk is added or a hot-spare is present, the mirror regenerates its resiliency.
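The capacity math in this paragraph is easy to sketch; note that it also reproduces the raw-to-vDisk ratios of the setups at the top of this post (96 TB raw to 32 TB, 9 TB raw to 3 TB, both consistent with three data copies):

```powershell
# Usable capacity = raw capacity divided by the number of data copies.
function Get-UsableTB([double]$RawTB, [int]$DataCopies) { $RawTB / $DataCopies }

Get-UsableTB -RawTB 2  -DataCopies 2   # two-way mirror: 2 TB raw -> 1 TB usable
Get-UsableTB -RawTB 96 -DataCopies 3   # three copies: 96 TB raw -> 32 TB usable
```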

Note: Your storage account is limited to a total request rate of up to 20,000 IOPs. You can add up to 100 storage accounts to your Azure subscription. A storage account design that is very application- or workload-centric is highly recommended. In other words, as a best practice, you probably don’t want to mix a large number of data disks for storage-intensive applications within the same storage account. Note that the performance profile for a single data disk is 500 IOPs. Consider this when designing your overall storage layout.
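Those two limits give a quick sizing rule (a sketch using only the figures in the note): at 500 IOPS per disk, one 20,000 IOPS storage account saturates at 40 standard disks, so the 80 disks in this setup want at least two accounts:

```powershell
$accountIopsLimit   = 20000
$diskIops           = 500
$maxDisksPerAccount = $accountIopsLimit / $diskIops                        # 40
$accountsNeeded     = [math]::Ceiling(80 * $diskIops / $accountIopsLimit)  # 2
```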



Now that the storage pools are in place, we can do some measurements on the speed of creating disks and on IOPS, based on ReFS and NTFS.

These disks I'm using for the Scale-Out File Server.

New-Volume -StoragePoolFriendlyName OWASP1 -FriendlyName OWADiskREFS14 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_REFS -Size 2000GB


New-Volume -StoragePoolFriendlyName OWASP1 -FriendlyName OWADiskNTFS15 -PhysicalDiskRedundancy 1 -FileSystem NTFS -Size 20GB



After some disk changes and creation, you can say that ReFS with Cluster Shared Volumes is about 100x as fast!


Now that we have cluster storage, I'm using it for the SOFS.

#create the SOFS 
New-StorageFileServer -StorageSubSystemName Tech-SOFS.AZUTFS.local -FriendlyName Tech-SOFS -HostName Tech-SOFS -Protocols SMB



Adding the disk and the next test is ready.


First we create a couple of disks on the ReFS share.



So a 1 TB disk creation is not much slower than a 100 GB file; remember, these are fixed files.

When I did this on the NTFS volume and created a 100 GB fixed disk, it took forever; after 10 minutes I stopped the command. This is why you always do a quick format on an NTFS disk.


A 1 GB disk creation is a better test; as you can see, this is around 8 times slower with a 1000x smaller disk.



Let's test IOPS; for this I use the DISKSPD tool: Diskspd Utility: A Robust Storage Testing Tool (superseding SQLIO).



So the disk creation is way faster, and when using this in a Hyper-V deployment, VM creation is way faster, and so is copying files.



I did only READ tests! If you also want the write test, use -w1; the -b is the block size.

Testing on REFS

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b64K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat

C:\run\diskspd.exe -c10G -d10 -r -w0 -t8 -o8 -b1024K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat


When using a little 10-second burst we got high rates, but this is not the goal.

C:\run\diskspd.exe -c10G -d10 -r -w0 -t8 -o8 -b1024K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat


Testing On NTFS

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b64K -h -L \\tech-sofs\Tech-NTFS01\testfil1e.dat





So basically you get much more IOPS than on a normal single disk, but it all depends on the block size configuration and storage type, Standard or Premium.

The main thing is: if you want fast IOPS and machines, it can be done in Azure. It will cost you, but it is also expensive on-premises.

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b4K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat

and with several runs you can get some nice results


But the config I used is around $30 total per hour.

A8 and A9 virtual machines feature Intel® Xeon® E5 processors. Adds a 32 Gbit/s InfiniBand network with remote direct memory access (RDMA) technology. Ideal for Message Passing Interface (MPI) applications, high-performance clusters, modeling and simulations, video encoding, and other compute or network intensive scenarios.

A8-A11 sizes are faster than D-series




Robert Smit

Cloud and Datacenter MVP ( Expertise:  High Available )

What’s new in Windows Server 2016 Failover Cluster overview Get-ClusterDiagnostics Enable-ClusterStorageSpacesDirect #winserv #windowsserver2016

A while ago I created a blog post about all the new properties in Windows Server 2016 clustering.

Well, now that we are close to the RTM version, a lot of things have changed and naming is different, so it is time for a refresh with a new twist.

That earlier post is here: https://robertsmit.wordpress.com/2014/12/02/what-is-change-in-windows-server-2015-10-cluster-setting-cluster-common-properties-winserv/


New options for the Storage Spaces Direct are in place https://robertsmit.wordpress.com/2015/05/18/whatif-hybrid-storage-spaces-direct-s2d-and-storage-replication-sr-azure-windows-server-2016-mvpvconf-ws2016-mvpbuzz/

There is now a PowerShell command for this, so no need for DasMode=1:

Disable-ClusterStorageSpacesDirect  Or  Enable-ClusterStorageSpacesDirect  


And there are a lot of new options in the cluster; in the next post I'll dig them up and show the options.

But what if we check the PowerShell commands?

Get-Command -Module failoverclusters

PS C:\Windows\system32> Get-Command -Module failoverclusters

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Alias           Add-VMToCluster                              FailoverClusters                                                  
Alias           Disable-ClusterS2D                           FailoverClusters                                                  
Alias           Enable-ClusterS2D                            FailoverClusters                                                  
Alias           Remove-VMFromCluster                         FailoverClusters                                                  
Function        Get-ClusterDiagnostics                       FailoverClusters                                                  
Cmdlet          Add-ClusterCheckpoint                        FailoverClusters                                                  
Cmdlet          Add-ClusterDisk                              FailoverClusters                                                  
Cmdlet          Add-ClusterFileServerRole                    FailoverClusters                                                  
Cmdlet          Add-ClusterGenericApplicationRole            FailoverClusters                                                  
Cmdlet          Add-ClusterGenericScriptRole                 FailoverClusters                                                  
Cmdlet          Add-ClusterGenericServiceRole                FailoverClusters                                                  
Cmdlet          Add-ClusterGroup                             FailoverClusters                                                  
Cmdlet          Add-ClusteriSCSITargetServerRole             FailoverClusters                                                  
Cmdlet          Add-ClusterNode                              FailoverClusters                                                  
Cmdlet          Add-ClusterPrintServerRole                   FailoverClusters                                                  
Cmdlet          Add-ClusterResource                          FailoverClusters                                                  
Cmdlet          Add-ClusterResourceDependency                FailoverClusters                                                  
Cmdlet          Add-ClusterResourceType                      FailoverClusters                                                  
Cmdlet          Add-ClusterScaleOutFileServerRole            FailoverClusters                                                  
Cmdlet          Add-ClusterServerRole                        FailoverClusters                                                  
Cmdlet          Add-ClusterSharedVolume                      FailoverClusters                                                  
Cmdlet          Add-ClusterVirtualMachineRole                FailoverClusters                                                  
Cmdlet          Add-ClusterVMMonitoredItem                   FailoverClusters                                                  
Cmdlet          Block-ClusterAccess                          FailoverClusters                                                  
Cmdlet          Clear-ClusterDiskReservation                 FailoverClusters                                                  
Cmdlet          Clear-ClusterNode                            FailoverClusters                                                  
Cmdlet          Disable-ClusterStorageSpacesDirect           FailoverClusters                                                  
Cmdlet          Enable-ClusterStorageSpacesDirect            FailoverClusters                                                  
Cmdlet          Get-Cluster                                  FailoverClusters                                                  
Cmdlet          Get-ClusterAccess                            FailoverClusters                                                  
Cmdlet          Get-ClusterAvailableDisk                     FailoverClusters                                                  
Cmdlet          Get-ClusterCheckpoint                        FailoverClusters                                                  
Cmdlet          Get-ClusterGroup                             FailoverClusters                                                  
Cmdlet          Get-ClusterLog                               FailoverClusters                                                  
Cmdlet          Get-ClusterNetwork                           FailoverClusters                                                  
Cmdlet          Get-ClusterNetworkInterface                  FailoverClusters                                                  
Cmdlet          Get-ClusterNode                              FailoverClusters                                                  
Cmdlet          Get-ClusterOwnerNode                         FailoverClusters                                                  
Cmdlet          Get-ClusterParameter                         FailoverClusters                                                  
Cmdlet          Get-ClusterQuorum                            FailoverClusters                                                  
Cmdlet          Get-ClusterResource                          FailoverClusters                                                  
Cmdlet          Get-ClusterResourceDependency                FailoverClusters                                                  
Cmdlet          Get-ClusterResourceDependencyReport          FailoverClusters                                                  
Cmdlet          Get-ClusterResourceType                      FailoverClusters                                                  
Cmdlet          Get-ClusterSharedVolume                      FailoverClusters                                                  
Cmdlet          Get-ClusterSharedVolumeState                 FailoverClusters                                                  
Cmdlet          Get-ClusterVMMonitoredItem                   FailoverClusters                                                  
Cmdlet          Grant-ClusterAccess                          FailoverClusters                                                  
Cmdlet          Move-ClusterGroup                            FailoverClusters                                                  
Cmdlet          Move-ClusterResource                         FailoverClusters                                                  
Cmdlet          Move-ClusterSharedVolume                     FailoverClusters                                                  
Cmdlet          Move-ClusterVirtualMachineRole               FailoverClusters                                                  
Cmdlet          New-Cluster                                  FailoverClusters                                                  
Cmdlet          New-ClusterNameAccount                       FailoverClusters                                                  
Cmdlet          Remove-Cluster                               FailoverClusters                                                  
Cmdlet          Remove-ClusterAccess                         FailoverClusters                                                  
Cmdlet          Remove-ClusterCheckpoint                     FailoverClusters                                                  
Cmdlet          Remove-ClusterGroup                          FailoverClusters                                                  
Cmdlet          Remove-ClusterNode                           FailoverClusters                                                  
Cmdlet          Remove-ClusterResource                       FailoverClusters                                                  
Cmdlet          Remove-ClusterResourceDependency             FailoverClusters                                                  
Cmdlet          Remove-ClusterResourceType                   FailoverClusters                                                  
Cmdlet          Remove-ClusterSharedVolume                   FailoverClusters                                                  
Cmdlet          Remove-ClusterVMMonitoredItem                FailoverClusters                                                  
Cmdlet          Reset-ClusterVMMonitoredState                FailoverClusters                                                  
Cmdlet          Resume-ClusterNode                           FailoverClusters                                                  
Cmdlet          Resume-ClusterResource                       FailoverClusters                                                  
Cmdlet          Set-ClusterLog                               FailoverClusters                                                  
Cmdlet          Set-ClusterOwnerNode                         FailoverClusters                                                  
Cmdlet          Set-ClusterParameter                         FailoverClusters                                                  
Cmdlet          Set-ClusterQuorum                            FailoverClusters                                                  
Cmdlet          Set-ClusterResourceDependency                FailoverClusters                                                  
Cmdlet          Start-Cluster                                FailoverClusters                                                  
Cmdlet          Start-ClusterGroup                           FailoverClusters                                                  
Cmdlet          Start-ClusterNode                            FailoverClusters                                                  
Cmdlet          Start-ClusterResource                        FailoverClusters                                                  
Cmdlet          Stop-Cluster                                 FailoverClusters                                                  
Cmdlet          Stop-ClusterGroup                            FailoverClusters                                                  
Cmdlet          Stop-ClusterNode                             FailoverClusters                                                  
Cmdlet          Stop-ClusterResource                         FailoverClusters                                                  
Cmdlet          Suspend-ClusterNode                          FailoverClusters                                                  
Cmdlet          Suspend-ClusterResource                      FailoverClusters                                                  
Cmdlet          Test-Cluster                                 FailoverClusters                                                  
Cmdlet          Test-ClusterResourceFailure                  FailoverClusters                                                  
Cmdlet          Update-ClusterFunctionalLevel                FailoverClusters                                                  
Cmdlet          Update-ClusterIPResource                     FailoverClusters                                                  
Cmdlet          Update-ClusterNetworkNameResource            FailoverClusters                                                  
Cmdlet          Update-ClusterVirtualMachineConfiguration    FailoverClusters                                                 


This is a long list, but the Get-* cmdlets in it give you instant results without changing anything on the cluster.

And check this out: Get-ClusterDiagnostics –Verbose

It is like the old Cluster Diagnostics and Verification Tool (ClusDiag.exe), but now it is all built into a single PowerShell command.


Get-ClusterDiagnostics runs a health test and zips the results into one file. That is really nice for troubleshooting, and for archiving a set next to the Cluster Validation report.


The zip file contains all the event logs and the cluster configuration, plus a list of all the configuration items with their values. In this case the cluster has only one node, so only one node is displayed.


It is a quick list of the cluster configuration with all the settings, the same ones you can see with PowerShell: Get-Cluster | fl *
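For example, to dump the full configuration or read a single common property (standard FailoverClusters cmdlets, run on any cluster node):

```
# List every cluster common property and its value
Get-Cluster | Format-List *

# Or read a single property directly, e.g. the same-subnet heartbeat delay
(Get-Cluster).SameSubnetDelay
```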


But is this the same as the cluster validation report? No, it is not, although it may contain some of the same information. For troubleshooting, both can be very handy.

Things can get very complex with all the new features: Storage Spaces Direct, Storage Replica, Cloud Witness, etc. Especially when you create a non-typical cluster configuration, which is on my list: to build the oddest cluster you have ever seen.


Happy clustering

Robert Smit



Technorati Tags: Windows Azure,Azure File service,Windows,Server,Clustermvp,Blob,cloud witness

How to Configure the File Share Witness or #Cloud Witness, Windows Server #ws2003 #ws2008 #ws2012 #ws2016 #winserv


The file share witness feature is an improvement to the current Majority Node Set (MNS) quorum model. This feature lets you use a file share that is external to the cluster as an additional "vote" to determine the status of the cluster in a two-node MNS quorum cluster deployment.
Consider a two-node MNS quorum cluster. Because an MNS quorum cluster can only run when the majority of the cluster nodes are available, a two-node MNS quorum cluster is unable to sustain the failure of any cluster node. This is because the majority of a two-node cluster is two. To sustain the failure of any one node in an MNS quorum cluster, you must have at least three devices that can be considered as available. The file share witness feature enables you to use an external file share as a witness. This witness acts as the third available device in a two-node MNS quorum cluster. Therefore, with this feature enabled, a two-node MNS quorum cluster can sustain the failure of a single cluster node.
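Configuring a file share witness from PowerShell is a one-liner; the share path below is just a placeholder, the share must live outside the cluster:

```
# Point the cluster quorum at an external file share witness
# \\fileserver\witness is a placeholder; use a share that is not on a cluster node
Set-ClusterQuorum -FileShareWitness \\fileserver\witness
```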

This is not new; you can configure a file share witness even on Windows Server 2003. But did you know you can use Azure as a cloud witness, yes, even for 2003? It will not work out of the box though: special handling is needed. And that keeps me wondering what code Windows Server 2016 has built in that can do this fun part.

Well, let's take a look at the servers:

But if you are still using Windows Server 2003 you have way too much time on your hands: Windows Server 2003 support is ending July 14, 2015.

But for this demo it will work.

I have a couple of clusters, like a museum: 2003, 2008, and so on up to 2016.

Windows Server 2003


Checking the cluster quorum; currently it is local.


Windows Server 2008

Earlier I created a blog post about creating a file share in Azure.


As Windows Server 2003 and 2008 are not in my scope anymore I will not go into depth on how to configure this, but you should look into the WebDAV options.



But in Windows Server 2016 it is easy: there is already an option in the Cluster Manager to do this in the Azure cloud.





This looks easy, but you will need to create a storage account in Azure first and copy and paste the access key.
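If you prefer PowerShell over the wizard, the same thing can be done with Set-ClusterQuorum; the account name and key below are placeholders for your own storage account:

```
# Configure the Azure Cloud Witness; AccountName is the storage account name,
# AccessKey is the primary access key copied from the Azure portal
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<primary-access-key>"
```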

Vote on my Idea to create all this directly in the FCM



More info about this :


You can also use the Azure file share locally and/or on other clusters (and versions).

We need to make sure PowerShell and the new Azure File Share cmdlets are installed. If you need to install PowerShell, you can install it from here. Once PowerShell is installed, you need to install the cmdlets for Azure File Share here.

The download is a ZIP file (AzureStorageFile.zip) that you should save and unpack to a local directory. Do not store the content in C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\ServiceManagement\Azure (i.e. the default directory of the Azure PowerShell installation), as this will result in some versioning issues. In our example, let’s say you extract the files to c:\AzureFiles.
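Mounting the share itself is plain SMB; a sketch where "mystorage" and "myshare" are placeholders for your storage account and share names:

```
# Persist the credentials, then map the Azure file share over SMB
# mystorage / myshare are placeholders; the password is the storage account key
cmdkey /add:mystorage.file.core.windows.net /user:mystorage /pass:<storage-account-key>
net use Z: \\mystorage.file.core.windows.net\myshare
```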

Using the Azure File share







The File share can be used for several Scenarios

  • “Lift and Shift” applications

Azure Files makes it easier to “lift and shift” applications to the cloud that use on-premises file shares to share data between parts of the application. To make this happen, each VM connects to the file share (see “Getting Started” below) and then it can read and write files just like it would against an on-premises file share.

  • Shared Application Settings

A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances. These settings can also be managed via the REST interface, which allows worldwide access to the configuration files.

  • Diagnostic Share

An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.

  • Dev/Test/Debug

When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

Again, this is just a preview. Just be sure to understand the limitations of Azure Files; the most important are:

  • 5TB per share
  • Max file size 1TB
  • Up to 1000 IOPS (of size 8KB) per share
  • Up to 60MB/s per share of data transfer for large IOs
  • SMB 2.1 support only

Here are the links on how to create an Azure file share and build your desktop share.


Build the Windows Server Cluster Azure Quorum Cloud Witness in just a few steps.


And yes, you can build several configurations with the Azure file share; cloud storage is there, so use it. There is only one thing with the cloud: you will need an internet connection from your servers, unless you already use ExpressRoute.


Happy clustering

Robert Smit



What’s new in Windows Server 2016 Clustering and Storage overview #winserv

What’s new in Windows Server 2016? Well, there are a lot of new features in Windows Server 2016. In the next few blog posts I’ll pick an item and show how to use that new feature.

On my blog there are already several items on Windows Server 2016: how to do Storage Spaces Direct, Storage Replica, Containers, or the new cluster PowerShell items. But there are always new items, so first I’m going to redo all the new cluster PowerShell items.

What is change in Windows Server 2016 (10) cluster – Setting Cluster Common Properties #winserv

Below is a short list of all the new items in Windows Server 2016. Maybe not every item is directly usable in your environment, and some may just be nice to have, so take a look.


  • Windows Server Containers: Windows Server 2016 Technical Preview now includes containers, which are an isolated, resource-controlled, and portable operating environment. They are an isolated place where an application can run without affecting the rest of the system or the system affecting the application. For some additional information on containers

  • What’s new in Active Directory Domain Services (AD DS) in Windows Server Technical Preview. Active Directory Domain Services includes improvements to help organizations secure Active Directory environments and provide better identity management experiences for both corporate and personal devices.

  • What’s New in Active Directory Federation Services. Active Directory Federation Services (AD FS) in Windows Server 2016 Technical Preview includes new features that enable you to configure AD FS to authenticate users stored in Lightweight Directory Access Protocol (LDAP) directories.

  • What’s New in Failover Clustering in Windows Server Technical Preview. This topic explains the new and changed functionality of Failover Clustering. A Hyper-V or Scale-out File Server failover cluster can now easily be upgraded without any downtime or need to build a new cluster with nodes that are running Windows Server 2016 Technical Preview.

  • What’s new in Hyper-V in Technical Preview. This topic explains the new and changed functionality of the Hyper-V role in Windows Server 2016 Technical Preview, Client Hyper-V running on Windows 10, and Microsoft Hyper-V Server Technical Preview.

  • Windows Server Antimalware Overview for Windows Server Technical Preview. Windows Server Antimalware is installed and enabled by default in Windows Server 2016 Technical Preview, but the user interface for Windows Server Antimalware is not installed. However, Windows Server Antimalware will update antimalware definitions and protect the computer without the user interface. If you need the user interface for Windows Server Antimalware, you can install it after the operating system installation by using the Add Roles and Features Wizard.

  • What’s New in Remote Desktop Services in Windows Server 2016. For the Windows Server 2016 Technical Preview, the Remote Desktop Services team focused on improvements based on customer requests. We added support for OpenGL and OpenCL applications, and added MultiPoint Services as a new role in Windows Server.

  • What’s New in File and Storage Services in Windows Server Technical Preview. This topic explains the new and changed functionality of Storage Services. An update in storage quality of service now enables you to create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks on Hyper-V virtual machines. Storage Replica is a new feature that enables synchronous replication between servers for disaster recovery, as well as stretching of a failover cluster for high availability.

  • What’s New in Web Application Proxy in Windows Server Technical Preview. The latest version of Web Application Proxy focuses on new features that enable publishing and preauthentication for more applications and improved user experience. Check out the full list of new features that includes preauthentication for rich client apps such as Exchange ActiveSync and wildcard domains for easier publishing of SharePoint apps.


Cluster Operating System Rolling Upgrade

A new feature in Failover Clustering, Cluster Operating System Rolling Upgrade, enables an administrator to upgrade the operating system of the cluster nodes from Windows Server 2012 R2 to Windows Server 2016 Technical Preview without stopping the Hyper-V or the Scale-Out File Server workloads. Using this feature, the downtime penalties against Service Level Agreements (SLA) can be avoided.
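The upgrade itself ends with one cmdlet from the list earlier in this post; a rough sketch of the final step:

```
# Check the current level first: 8 = Windows Server 2012 R2, 9 = Windows Server 2016
(Get-Cluster).ClusterFunctionalLevel

# After ALL nodes run Windows Server 2016, commit the upgrade.
# This is one-way: there is no going back to 2012 R2 afterwards.
Update-ClusterFunctionalLevel
```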

Storage Replica

Storage Replica (SR) is a new feature that enables storage-agnostic, block-level, synchronous replication between servers or clusters for disaster recovery, as well as stretching of a failover cluster between sites. Synchronous replication enables mirroring of data in physical sites with crash-consistent volumes to ensure zero data loss at the file-system level. Asynchronous replication allows site extension beyond metropolitan ranges with the possibility of data loss.
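Before creating a partnership you can validate the candidate volumes and measure the link with Test-SRTopology; server names and drive letters below are just examples:

```
# Validate the source/destination volumes and measure the link for 10 minutes;
# server names and drive letters are placeholders for your own environment
Test-SRTopology -SourceComputerName "srv01" -SourceVolumeName "f:" -SourceLogVolumeName "g:" `
                -DestinationComputerName "srv02" -DestinationVolumeName "f:" -DestinationLogVolumeName "g:" `
                -DurationInMinutes 10 -ResultPath "c:\temp"
```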

Cloud Witness

Cloud Witness is a new type of Failover Cluster quorum witness in Windows Server 2016 Technical Preview that leverages Microsoft Azure as the arbitration point. The Cloud Witness, like any other quorum witness, gets a vote and can participate in the quorum calculations. You can configure cloud witness as a quorum witness using the Configure a Cluster Quorum Wizard.


Virtual Machine Resiliency

Compute Resiliency: Windows Server 2016 Technical Preview includes increased virtual machine compute resiliency to help reduce intra-cluster communication issues in your compute cluster.


Diagnostic Improvements in Failover Clustering

To help diagnose issues with failover clusters, Windows Server 2016 Technical Preview includes the following:

  • Several enhancements to cluster log files (such as time zone information and the DiagnosticVerbose log) that make it easier to troubleshoot failover clustering issues.

  • A new dump type, Active memory dump, which filters out most memory pages allocated to virtual machines and therefore makes the memory.dmp much smaller and easier to save or copy.


Site-aware Failover Clusters

Windows Server 2016 Technical Preview includes site-aware failover clusters that enable grouping nodes in stretched clusters based on their physical location (site). Cluster site-awareness enhances key operations during the cluster lifecycle such as failover behavior, placement policies, heartbeats between the nodes, and quorum behavior.

Workgroup and Multi-domain clusters

In Windows Server 2012 R2 and previous versions, a cluster can only be created between member nodes joined to the same domain. Windows Server 2016 Technical Preview breaks down these barriers and introduces the ability to create a Failover Cluster without Active Directory dependencies. You can now create failover clusters in the following configurations:

  • Single-domain Clusters. Clusters with all nodes joined to the same domain.

  • Multi-domain Clusters. Clusters with nodes which are members of different domains.

  • Workgroup Clusters. Clusters with nodes which are member servers / workgroup (not domain joined).
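Creating such a cluster differs from a domain cluster mainly in the administrative access point; a sketch with placeholder node names and IP:

```
# A workgroup / AD-detached cluster uses a DNS access point
# instead of a computer object in Active Directory
# (node names and the static address are placeholders)
New-Cluster -Name wgcluster -Node "node1","node2" -AdministrativeAccessPoint DNS -StaticAddress 10.0.0.50
```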

System Center Hashtags like #cloud

System Center: #sysctr
System Center App Controller: #appctrl
System Center Virtual Machine Manager: #vmm
System Center Service Manager: #scsm
System Center Operations Manager: #scom
System Center Data Protection Manager: #dpm
System Center Orchestrator: #sco
System Center Advisor: #scadvisor
System Center Configuration Manager: #configmgr
System Center Azure: #azure
System Center Windows Azure Pack: #wap

System Center All Up: http://blogs.technet.com/b/systemcenter/

System Center – Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/
System Center – Data Protection Manager Team blog: http://blogs.technet.com/dpm/
System Center – Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/
System Center – Operations Manager Team blog: http://blogs.technet.com/momteam/
System Center – Service Manager Team blog: http://blogs.technet.com/b/servicemanager

System Center – Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

Windows Intune: http://blogs.technet.com/b/windowsintune/

WSUS Support Team blog: http://blogs.technet.com/sus/

The AD RMS blog: http://blogs.technet.com/b/rmssupp/

App-V Team blog: http://blogs.technet.com/appv/

MED-V Team blog: http://blogs.technet.com/medv/
Server App-V Team blog: http://blogs.technet.com/b/serverappv

The Forefront Endpoint Protection blog : http://blogs.technet.com/b/clientsecurity/
The Forefront Identity Manager blog : http://blogs.msdn.com/b/ms-identity-support/
The Forefront TMG blog: http://blogs.technet.com/b/isablog/
The Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/


Happy clustering

Robert Smit



Windows Server 2016 With Clustered SQL Server 2016 Instance #winserv #SQL #WS2016 #SQL2016

Now that Windows Server 2016 is here, and SQL Server 2016 too, you can build a default cluster with SQL, but that is no fun. How about building a SQL cluster on Storage Spaces Direct, putting the databases on storage replication, or building a hybrid SQL cluster with Azure?

On my blog there are plenty of samples on how to build this. This post is an easy step-by-step on how to create a cluster with the 2016 products (basically the same as 2008 and 2012 R2).

Get your Windows Server 2016 here: https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview

What’s new in Windows Server 2016 ?


The cluster part is easy; I use three lines of PowerShell to create it. Needless to say, .NET 3.5 is needed for SQL!

Windows Server 2016 is there and SQL server 2016

Installing the cluster feature (you will need to reboot the nodes):
Install-WindowsFeature "Failover-Clustering","RSAT-Clustering" -IncludeAllSubFeature -ComputerName "mvpsql16-1.mvp.local","mvpsql16-2.mvp.local"
Test-Cluster -Node "mvpsql16-1.mvp.local","mvpsql16-2.mvp.local"
New-Cluster -Name Techdays01 -Node "mvpsql16-1.mvp.local","mvpsql16-2.mvp.local" -NoStorage -StaticAddress ""

The cluster is in place. Yes, you could create a new failover cluster in the GUI, but that is too easy; the command line is way faster with the advanced config.



Let’s pick the two-step installation in the advanced menu: cluster prep and cluster completion.


The first step is the advanced cluster preparation. When installing from the command line you will need an INI file; during the GUI setup you can save this INI file and use it later, or you can use an older (non-2016) file if you just want to set up the DB.


If you are using named instances you can see this error if you are not using the right names


SQL named instance Requirements  : https://msdn.microsoft.com/en-us/library/ms143531.aspx


Step 1


When using the command line with the INI file you can do this:

Remember to change the INI file and set UIMODE="Normal".
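From the command line, the two phases map to two SQL setup actions; a sketch where the INI file paths are placeholders:

```
# Phase 1, run on every node: prepare the failover cluster instance
.\Setup.exe /QS /ACTION=PrepareFailoverCluster /ConfigurationFile=C:\setup\prepare.ini

# Phase 2, run on one node only: complete the cluster and bring the instance online
.\Setup.exe /QS /ACTION=CompleteFailoverCluster /ConfigurationFile=C:\setup\complete.ini
```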



In just a few minutes you have completed the whole setup (a 5-minute setup, like Windows Server 2012 R2).


Step 1 is done. There is still nothing to see in the cluster, but the first part of the SQL install is done.




Creating the Cluster with the advanced Cluster completion.


It is important that you ran the cluster validation, or else the setup will fail.



In step 2, use the named instance and the INI file created earlier, if you don’t already have this file.




Done in just a few minutes, and ready for the next cluster rollout: using the unattended files, just change the names, IPs, and/or locations in the INI files and you are ready to go.



Done for Part 1 the cluster basics.

Happy clustering

Robert Smit



Building #USB SOFS with Storage Spaces Direct #S2D #WS2016

I like Storage Spaces, as you can use a simple disk to play with thin provisioning and show off a huge 480 TB disk (on USB).

But that alone is no fun. And I am not saying this is supported! Play at your own risk.


This is fun for demos, and I was thinking about a fun blog post; this could be it. At least I got things working.

I used my old USB disks and thought: has anyone built a USB Scale-Out File Server? Guess not. Let me bing that for you: http://tinyurl.com/lprxqsf

Playing with the new cluster options, as shown in an old blog post: what is new in Server 2016.

But looking at just those 4 options to enable Storage Spaces Direct, that is no fun. What about the other options?

Storage Space Direct

With a little help from the STORAGE_BUS_TYPE enumeration on MSDN we can do fun things with some old disks.


So, enabling the options, I started building my Scale-Out File Server with my USB thumb drive storage.


With DASModeEnabled set and the bus type changed, the disks are online.



Creating my Storage Space Direct


Storage Space Direct

Got my three disks online in my cluster enclosure.


I created a disk and made it a CSV. Too bad I could not thin-provision this disk, so a max of 700 GB is there.

Storage Space Direct

Just a screen shot of my Cluster with Storage Space Direct

Storage Space Direct

See how much fun new technology can be: play and learn.


Download Windows Server Technical Preview evaluations:

Happy clustering

Robert Smit



whatif Hybrid Storage Spaces Direct #S2D and Storage Replication #SR #Azure Windows Server 2016 #windowsserver2016

As you know, in Windows Server you can use the local storage in your cluster, and you can also replicate storage between two servers.

But what if we combine these two options? It seems logical, and with some advanced config it could work. OK, but what if I use Azure for this, and better, I use a hybrid config with a cluster that also has a leg on-premises? Will this work? I don’t think it is supported, but cluster validation passed on this, so it must be supported. With a *

Ok what Do I need for this :

Azure Subscription – Check

Azure Site – to – site VPN  – Check

On-premises cluster – Check, a 6-node cluster

Azure cluster nodes – Check, 2 nodes running in Azure.

Fast Internet line – check

I’m not showing you all the details, else it would be a very long blog post; I have already posted on how to build your replica and how to use Storage Spaces Direct. Here we combine those two options.



My setup is 4 cluster nodes on Hyper-V on-premises and two nodes in Azure, all running Windows Server 2016.

Basically, what I did is build a cluster with Storage Spaces Direct based on 3 local disks, and on top of this I created 2 disks that I used for replication.


As you can see I have 26 disks in node 1: different sizes, shared and non-shared disks, all running on my Hyper-V 2012 R2 server.


And with the Storage Spaces Direct option my cluster would look like this: a hybrid cluster with all the best options in Windows Server 2016.


My storage pools: one is running in Azure and one is on-premises. As for the replication, all replication disks need to be the same size.


Sizing is difficult in Azure, so I first created the Azure disks to see what size they are, and after that I created the on-premises disks.


This is really nice: all native Windows Server 2016. The only thing you need is a fast internet line, and currently the limit is the access to Azure (if you don’t have ExpressRoute).


Setting up the replication is easy with PowerShell:

New-SRPartnership -SourceComputerName win2015-1 -SourceRGName Azure_group01 -SourceVolumeName u: -SourceLogVolumeName v: -DestinationComputerName win2015-6 -DestinationRGName Azure_group02 -DestinationVolumeName p: -DestinationLogVolumeName Q: -LogSizeInBytes 1gb
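Once the partnership exists you can check on it with the other SR cmdlets:

```
# Show the replication groups, then drill into the replica status of each
Get-SRGroup
(Get-SRGroup).Replicas

# Show the partnership itself (source and destination)
Get-SRPartnership
```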


My replication with Storage Spaces Direct in a hybrid configuration. As you can see, more configurations are possible with this, and if you make sure the line latency is below 50 ms, things could work just fine.


Building this is fun, but imagine you need to troubleshoot it: where are my disks and what is failing? Things get complex. Even so, with the hybrid model, extending your datacenter to Azure is a bit closer.


With today's fast internet it is easy to build this, and hybrid solutions are easy to build, especially with Windows Server 2016's built-in replica and local storage for clustering. Extend your lab or production to Azure and you can demonstrate highly available solutions today. But keep in mind that building and troubleshooting can be a pain as environments get complex.


Checkout the MVP V-Conf Session

Deploying Highly Available SQL Server in Microsoft Azure IaaS



Download Windows Server Technical Preview evaluations:

Happy clustering

Robert Smit



#improve Windows Server 2016 #UserVoice page #S2D #ws2016 #nano #linux #CPS #cloud #powershell #preview

Windows Server has a new UserVoice page: http://windowsserver.uservoice.com/forums/295047-general-feedback with subsections:

General Feedback

Do you have an idea or suggestion based on your experience with Windows Server? We would love to hear it! Please take a few minutes to submit your idea in one of the forums available on the right or vote up an idea submitted by another Windows Server customer. All of the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Windows Server. Suggestions can apply to both released and Technical Preview versions of Windows Server.

This forum (General Feedback) is used for any broad feedback related to Windows Server. If you have feedback on a specific aspect of Windows Server, for example Storage, Networking, Virtualization, Nano Server, etc., please submit your feedback in one of the forums available on the right.

If you are looking to provide feedback on Automation (PowerShell and Scripting) please provide your suggestions using our PowerShell Connect Site.

Remember that this site is only for feature suggestions and ideas!

If you have technical questions or need help with Windows Server, please visit our TechNet Forums.

To see our work in progress, please go ahead and install the Windows Server Technical Preview. More information on what’s new in the Technical Preview can be found here. You can join the conversation about the Technical Preview and swap advice with others at Technical Preview Forums.

Clustering: http://windowsserver.uservoice.com/forums/295074-clustering

Storage: http://windowsserver.uservoice.com/forums/295056-storage

Virtualization: http://windowsserver.uservoice.com/forums/295050-virtualization

Networking: http://windowsserver.uservoice.com/forums/295059-networking

Nano Server: http://windowsserver.uservoice.com/forums/295068-nano-server

Linux Support: http://windowsserver.uservoice.com/forums/295062-linux-support


At the heart of the Microsoft Cloud Platform, Windows Server brings Microsoft’s experience delivering global-scale cloud services into your infrastructure. Windows Server Technical Preview 2 provides a wide range of new and enhanced features and capabilities spanning server virtualization, storage, software-defined networking, server management and automation, web and application platform, access and information protection, virtual desktop infrastructure, and more.

As a reminder, these are early pre-release builds. Many of the features and scenarios are still in development. As such, these builds are not intended for production environments, labs, nor full evaluations. This is pre-released software; features and functionality may differ in the final release.

Need more information about the next version of Windows Server? See what’s new in Windows Server Technical Preview 2.

Download for Windows Server 2016 Technical Preview 2 (TP2) is here: http://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview


Happy clustering

Robert Smit

follow me : @clusterMVP


MVP Profile : http://mvp.microsoft.com

Virtual Hard Disk Sharing ( shared VHDX ) Storage Pool Usage in Failover Clustering Windows Server 2012 R2


Storage Spaces: Benefits and Limitations :
  • Obtain and easily manage reliable and scalable storage with reduced cost
  • Aggregate individual drives into storage pools that are managed as a single entity
  • Utilize simple inexpensive storage with or without external storage
  • Provision storage as needed from pools of storage you’ve created
  • Grow storage pools on demand
  • Use PowerShell to manage Storage Spaces for Windows 8 clients or Windows Server 2012
  • Delegate administration by specific pool
  • Use diverse types of storage in the same pool: SATA, SAS, USB, SCSI
  • Use existing tools for backup/restore as well as VSS for snapshots
  • Designate specific drives as hot spares
  • Automatic repair for pools containing hot spares with sufficient storage capacity to cover what was lost
  • Management can be local, remote, through MMC, or PowerShell
  • Not supported on boot, system, or CSV volumes
  • Drives must be 4GB or larger
  • When you introduce a drive into a storage pool, the contents of the drive being added will be lost.
  • Add only un-formatted/un-partitioned drives
  • A simple storage pool must consist of at least one drive
  • A mirrored pool must have at least 2 drives; 3-way mirroring requires at least 5
  • Three drive minimum for using Parity
  • All drives in a pool must use the same sector size
  • Fibre-channel and iSCSI are not supported
  • Storage must be storport.sys compatible
  • Virtual disks to be used with a failover cluster that emanate from a storage pool must use the NTFS file system.  ReFS or third-party file systems may be used for other purposes

How does Storage Spaces work ?

The volumes you create within a storage pool are basically virtual disks located on the storage pool that you may then partition, format, and assign drive letters as applicable.  Storage Spaces maintains the health of these drives and any redundancy selected.  Storage Spaces stores metadata on every volume within the storage pool that defines how data will be stored within the pool.

Primordial Pool ?

All storage that meets acceptable criteria for Storage Spaces will be placed in the Primordial Pool.  This can be considered the default pool for devices from which any other pools will be created.
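You can see the Primordial pool and the disks still sitting in it with PowerShell; a quick sketch using the Storage module cmdlets:

```powershell
# List the primordial pool(s)
Get-StoragePool -IsPrimordial $true

# Show which of those disks are actually eligible for pooling
Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $true
```

Any disk that shows CanPool = True here is what the GUI will offer you when creating a new pool.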

  • Storage pools. A collection of physical disks that enable you to aggregate disks, expand capacity in a flexible manner, and delegate administration.
  • Storage spaces. Virtual disks created from free space in a storage pool. Storage spaces have such attributes as resiliency level, storage tiers, fixed provisioning, and precise administrative control.

Storage Spaces Overview : http://technet.microsoft.com/en-us/library/hh831739.aspx


In this blog I create a storage pool on shared VHDX files, which is not supported in production. But you can play with it and use it in your demos.

Storage pools can be very useful and highly scalable, but in my opinion you should not put a witness disk in a storage pool. The reason is that if the storage pool goes down, whatever the cause, the witness disk goes down with it. Make sure the witness disk is always available, or use a file share witness instead.
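A file share witness can be configured with a single FailoverClusters cmdlet; a sketch, where the share path is a placeholder for a share that lives outside the cluster:

```powershell
# Switch the cluster quorum to node majority plus a file share witness
# \\fileserver\ClusterWitness is a hypothetical path - use your own share
Set-ClusterQuorum -NodeAndFileShareMajority \\fileserver\ClusterWitness
```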

Again my demo cluster is the base of this, just to show you how easily you can handle things in Windows Server 2012 R2.

First I create 10 Disks that I will use in this demo. 

1..10 | % { New-VHD -Path "m:\shareData$($_).VHDX" -Fixed -SizeBytes 4GB }

Be aware the minimal disk size for a storage pool is 4 GB.

Then I add the disks to my VMs. You can change the 1..10 range to your own values, and the VM names as well.

1..10 | % { $p = "m:\shareData" + $_ + ".VHDX" ; 10..66 | % { $v = "Demo" + $_; Write-Host $v, $p; Add-VMHardDiskDrive -VMName $v -Path $p -ShareVirtualDisk } }

Now that the disks are added to the VMs I can use them. The adding can be done even while the VMs are running, so there is no downtime.

If you log on to a VM and run Get-PhysicalDisk, you will see that the CanPool property is True; this means the disk can be used for storage pools.

Next I will create a storage pool and the virtual disks in it. You can do this with PowerShell or with the GUI.
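The pool itself can be created first from the poolable disks; a minimal sketch, assuming the pool name DemoPool1 used below (on a cluster, check the exact subsystem name first with Get-StorageSubSystem):

```powershell
# Collect all disks that report CanPool = $true
$disks = Get-PhysicalDisk -CanPool $true

# Pick the storage subsystem (there is usually one on a standalone node)
$subsystem = Get-StorageSubSystem

# Create the pool from those disks
New-StoragePool -FriendlyName DemoPool1 `
    -StorageSubSystemFriendlyName $subsystem.FriendlyName `
    -PhysicalDisks $disks
```

Once the pool exists, the New-VirtualDisk commands below carve storage spaces out of it.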

New-VirtualDisk -FriendlyName DemoSpace1 -StoragePoolFriendlyName DemoPool1 -ResiliencySettingName Mirror -Size 1GB

2..7 | % { New-VirtualDisk -FriendlyName DemoSpace$_ -StoragePoolFriendlyName DemoPool1 -ResiliencySettingName Mirror -Size 1GB }

I created 7 virtual disks; I will use six disks for CSV and one disk for a file server. See my other blog post:

64 Node Sharing Virtual Hard Disk (shared VHDX) in Failover Clustering Windows Server 2012 R2 #Winserv #ITPro – http://tinyurl.com/qh5rvnj



The next step is creating and initializing the disks. If you check the properties of a virtual disk, you will see its configuration.


1..7 | % { $Letter = "MVPLNTU"[$_ - 1]

$Number = (Get-VirtualDisk -FriendlyName DemoSpace$_ | Get-Disk).Number

Set-Disk -Number $Number -IsReadOnly 0

Set-Disk -Number $Number -IsOffline 0

Initialize-Disk -Number $Number -PartitionStyle MBR

New-Partition -DiskNumber $Number -DriveLetter $Letter -UseMaximumSize

Format-Volume -DriveLetter $Letter -FileSystem NTFS -Confirm:$false }


Now the pool is ready for use. I create a few shares on the Cluster Shared Volumes and set some rights.

1..4 | % { MD C:\ClusterStorage\Volume$_\DemoShare

New-SmbShare -Name DemoShare$_ -Path C:\ClusterStorage\Volume$_\DemoShare -FullAccess mvp.local\Administrator

Set-SmbPathAcl -ShareName DemoShare$_ }
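To verify the result, Get-SmbShare lists the new shares; a quick check, run on the cluster node:

```powershell
# List the demo shares and where they live
Get-SmbShare -Name DemoShare* | Format-Table Name, Path, ScopeName
```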


Storage pools can be created with different resiliency layouts: two-way mirror, three-way mirror, and parity. I created a mirror here.
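The other layouts are selected the same way via -ResiliencySettingName; a sketch using the pool name from above, with -PhysicalDiskRedundancy 2 turning a mirror into a three-way mirror:

```powershell
# Three-way mirror: survives two failed disks, needs at least five disks in the pool
New-VirtualDisk -FriendlyName Demo3Way -StoragePoolFriendlyName DemoPool1 `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -Size 1GB

# Single parity: survives one failed disk, needs at least three disks
New-VirtualDisk -FriendlyName DemoParity -StoragePoolFriendlyName DemoPool1 `
    -ResiliencySettingName Parity -Size 1GB
```

Remember that parity spaces were not supported on failover clusters in 2012 R2, as the linked article below explains.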

What are the resiliency levels provided by Enclosure Awareness?

All configurations are enclosure aware. Failure coverage per enclosure (JBOD) count:

Storage Space Configuration | Three JBODs | Four JBODs           | Five JBODs
2-way Mirror                | 1 Disk      | 1 Enclosure          | 1 Enclosure
3-way Mirror                | 2 Disks     | 1 Enclosure + 1 Disk | 1 Enclosure + 1 Disk
Dual Parity                 | 2 Disks     | 2 Disks              | 1 Enclosure + 1 Disk


There are great links to TechEd and the TechNet wiki if you want to know more about Storage Spaces: Parity space support for failover clusters

Storage Spaces Frequently Asked Questions (FAQ)