Azure Stack POCFabricInstaller failed because the following tasks failed: EnableRemotePS #AzureStack #Azure #MAS

Currently I'm testing several MAS POC deployments. For this I deployed my servers with VMM, ready for MAS.

But there are some issues during the deployment.

POCFabricInstaller failed because the following tasks failed: EnableRemotePS

 


WinRM is configured and the installer runs with full admin rights, so why does this fail?

As the Azure Stack deployment runs in verbose mode, there is plenty of information spread across a lot of log files.


The deployment failed here:

Microsoft Azure Stack POC Deployment
7 out of 124 task(s) completed
[ooooooo

Running
Microsoft Azure Stack POC Fabric Installer
Running Task(s): 1, Completed Task(s): 7, Total Tasks: 52
[ooooooooooooooo

 

So it was time to check the log files for the cause of this error; I knew it was not the WinRM configuration.

When checking the log files I found the following:

 

System.Management.Automation.RemoteException
Cannot stop service ‘Windows Remote Management (WS-Management) (WinRM)’
because it has dependent services. It can only be stopped if the Force flag is
set.
Job fail due state: Failed

 

So a dependent service is preventing my WinRM service from being stopped. Looking at the service and its dependencies, I saw this:

 


Ah, the VMM agent is causing this error during the Azure Stack deployment.
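
You can confirm which services depend on WinRM from PowerShell as well; a minimal sketch using standard cmdlets:

# List the services that depend on WinRM; a running dependent (here the VMM agent)
# blocks a plain Stop-Service, which would need the -Force flag to succeed.
Get-Service -Name WinRM -DependentServices | Select-Object Status, Name, DisplayName
# Stop-Service -Name WinRM -Force   # this is what the deployment script does not do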

VERBOSE: Importing function 'Start-AzureStackDeploymentScheduledTask'.
Report-Progress : The Microsoft Azure Stack POC deployment failed.
Start-PocFabricInstallerTasks : POCFabricInstaller failed because the following tasks failed: EnableRemotePS
At C:\ProgramData\Microsoft\AzureStack\Deployment\RunAzureStackDeploymentTask.ps1:158 char:19
+ … $result = & "Start-$moduleName`Tasks" -StatusUpdatedCallback {
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Start-PocFabricInstallerTasks

At C:\ProgramData\Microsoft\AzureStack\Deployment\Get-AzureStackDeploymentStatus.ps1:107 char:15
+ $Result = Report-Progress($status)
+ ~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Report-Progress


I uninstalled the VMM agent, kicked off the Azure Stack deployment PowerShell script again, and it ran smoothly.


The logs can be found here:

C:\ProgramData\Microsoft\AzureStack\Logs\AzureStackFabricInstaller

Happy Stacking

Robert Smit

Twitter: @clustermvp

Cloud and Datacenter MVP (Expertise: High Availability)

Posted February 3, 2016 by Robert Smit [MVP] in AzureStack


Azure Stack deployment tweaking #azurestack #azure #deployment

After posting my previous blog I got some questions about how to deploy the Stack and where to tweak it.

Well it is easy if you know the PDT kit.

Basically if you extracted the files you will have this folder

image

 

The MicrosoftAzureStackPOC.vhdx contains all the scripts used during the deployment; this disk is mounted as the source. So to change any files, you need to mount this disk and edit the files before you start the deployment.
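
Mounting and dismounting the VHDX can be scripted as well; a minimal sketch, assuming an example path to where you extracted the kit:

# Mount the POC VHDX so the deployment scripts on it can be edited (path is an example)
Mount-DiskImage -ImagePath 'C:\AzureStackPOC\MicrosoftAzureStackPOC.vhdx'

# ... edit the files on the mounted volume ...

# Detach the disk again before starting the deployment
Dismount-DiskImage -ImagePath 'C:\AzureStackPOC\MicrosoftAzureStackPOC.vhdx'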


First, let's edit the disk types. To make sure the script still works, in this example I only do a find/replace and add no code (you can, though).

After mounting the disk we edit the Invoke-AzureStackDeploymentPrecheck.ps1 file

image

In this case I added "File Backed Virtual" so local VHD files pass the disk check.

image

Or change the memory check: set it to 32 GB if you want, or to 8 GB. Remember, this only makes the validation pass; changing it could still make the installation fail.
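
To give an idea of the kind of change, here is a hypothetical sketch of such a memory check; it is illustrative only and not the actual contents of the precheck script:

# Hypothetical example; the real Invoke-AzureStackDeploymentPrecheck.ps1 looks different.
$memoryGB = (Get-CimInstance -ClassName Win32_ComputerSystem).TotalPhysicalMemory / 1GB
if ($memoryGB -lt 32) {   # lowering this threshold only bypasses the validation
    throw "Check memory failed. At least 32 GB of RAM is required."
}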

image

Or tweak the NIC check, if a single NIC is not an option in your configuration.

image

That’s All

Now, in the E:\AzureStackInstaller\PoCFabricInstaller folder there is the PoCFabricSettings.xml file.

 

image

PoCFabricSettings.xml holds all the settings: CPU, memory, naming. You can change all of it here, but remember it could break your installation, so handle with care.
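
If you prefer to script the change instead of editing by hand, something along these lines works; the node path is a placeholder, so check the real XML for the actual element names:

# Load the settings, change a value, and write the file back
$path = 'E:\AzureStackInstaller\PoCFabricInstaller\PoCFabricSettings.xml'
[xml]$settings = Get-Content -Path $path
# e.g. $settings.SelectSingleNode('//SomeVM/MemoryMB').InnerText = '4096'
$settings.Save($path)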

image

I must say the scripts are great, but there is not a lot of flexibility, and it takes some testing just to make sure it all works. I played on an HP blade G9 with SSDs, so running the setup doesn't take that long, but playing with this still kills the day; there is no 10-minute fix, and troubleshooting takes time.

Check out this forum link to support others:

https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=AzureStack

 

Happy Stacking

Robert Smit

Twitter: @clustermvp

Cloud and Datacenter MVP (Expertise: High Availability)

Posted February 2, 2016 by Robert Smit [MVP] in AzureStack


First Errors in Azure Stack Deployment #MAS #AzureStack #Azure #MASCUG Microsoft Azure Stack POC is ready to deploy

Playing with the Azure Stack deployment is no picnic: there is a pre-check, but you need the perfect machine to deploy the Azure Stack bits, so I tweaked the scripts a bit. This is the logical architecture of the Azure Stack POC and its components, all running on a single Hyper-V host.

But even then something can go wrong. As shown below some errors are highlighted.

Check disks failed. At least 3 disks of the same bus type (RAID/SAS/SATA) and with CanPool attribute equals true are required. (I had added some other storage as well.)

Cannot bind argument to parameter 'PackagePath' because it is an empty string. (I had set a variable before running the script.)

image

Welcome to the Microsoft Azure Stack POC Deployment!
There are several prerequisites checks to verify that your machine meets all the minimum requirements for deploying Microsoft Azure Stack POC.
All of the prerequisite checks passed.
Please enter the password for the built-in administrator. The password must meet the Azure Active Directory password complexity requirements.
Password: **********
Confirm password: **********
Setup system admin account
Please sign in to your Azure account in the Microsoft Azure sign in window.
Press any key to continue …

 

image

But after conquering all the prerequisites, you are ready to go. Or not?

During the deployment I hit this error: Method "NewTriggerByStartup" not found.
It seems an updated PowerShell module is in place and my build has a bug. After some digging in the PowerShell modules I managed to fix this.

image

Microsoft Azure Stack POC is ready to deploy. Continue?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): y
New-ScheduledTaskTrigger : Method "NewTriggerByStartup" not found
At F:\AzureStackInstaller\PoCDeployment\AzureStackDeploymentScheduledTask.psm1:27 char:16
+     $trigger = New-ScheduledTaskTrigger -AtStartup
+                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (PS_ScheduledTask:Root/Microsoft/…S_ScheduledTask) [New-ScheduledTaskTrigger], CimException
    + FullyQualifiedErrorId : HRESULT 0x80041002,New-ScheduledTaskTrigger

 

The real fix is recompiling the scheduled task provider MOF: mofcomp C:\Windows\System32\wbem\SchedProv.mof
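
A minimal sketch of the fix plus a quick verification, run from an elevated PowerShell prompt:

# Recompile the scheduled task WMI provider
mofcomp C:\Windows\System32\wbem\SchedProv.mof

# Quick check that the cmdlet works again before re-running the deployment
New-ScheduledTaskTrigger -AtStartup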

After some digging I found that there is already a UserVoice post on this issue, so go vote for it.

image

I must say the scripts are awesome; there is a lot of handy stuff in there. I did the next-next-finish setup to see what I could expect and how to build this in a non-default environment, so not on a single server.

That's all for now.

Greetings,

Robert Smit

Twitter: @clustermvp

Cloud and Datacenter MVP (Expertise: High Availability)

Posted February 1, 2016 by Robert Smit [MVP] in AzureStack


Windows Server cluster issues moving cluster resources cno object is gone #winserv #cluster #cloud #fail #cno #migrate

Suppose you have a nice cluster, and one day your fellow IT guy comes along and says: let's move all the cluster resources to a specific cluster node.

This seems like a normal step, but wait, there is more: there is also a cluster resource that needs to be moved, "the CNO object".

image

The Cluster Resources up and running

image

It is always hard to find where to move the core cluster resources, but it is labeled "Move Core Cluster Resources", so that is easy.

image

But what if my IT guy can't find this option? Hmm, on a normal role you can do "Assign to Another Role". That sounds OK: move the CNO to another role. Eh?? Why is that option even there? Well, it is there, so let me use it and see what happens.

So let me move this CNO to node 2.

image

Done. Hey Joe, I'm ready, all the resources are on node 2.

image

All fine, all the resources are over. OK, let me fail over to the other node and put everything back in its original place. Well, this sounds easy, but where are all the options?

 


Eh.. what ? let me reboot the server and the cluster, checking for updates…. call for help..

The cloud IT pro comes back and looks: the core cluster resource objects are gone, but the cluster is still running, and there are tons of SCOM errors: cluster is down, CNO, and so on.

OK, but where are my cluster objects?

Let me do some PowerShell 

Get-ClusterGroup

image

Oh, OK, all the resources are there and up. But why can't I move the resource back in the GUI? Well, I guess Microsoft keeps you away from the core cluster resources, because you might break the cluster.

image

Now that we can see the cluster resources and cluster groups, we can move the CNO object back to the right place.

$CLU=get-cluster
Move-ClusterResource -Cluster $clu  -Name "Cluster IP Address" -Group "Cluster Group"

image

The cluster resources need to be online, or else you get an error. Just bring the resources online and try again.
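
As a rough sketch, with the default resource and group names (yours may differ):

# Bring the core cluster group online first, then move the core resources back
Start-ClusterGroup -Name "Cluster Group"
Move-ClusterResource -Name "Cluster IP Address" -Group "Cluster Group"
Move-ClusterResource -Name "Cluster Name" -Group "Cluster Group"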

image

So next time, don't move the CNO object to another cluster role. And this is why there are cluster admins. ;-)

 

 

Greetings,

Robert Smit

Twitter: @clustermvp

Cloud and Datacenter MVP (Expertise: High Availability)

Posted January 25, 2016 by Robert Smit [MVP] in Windows Server 2016


Azure StorSimple Manager the on-premises StorSimple Virtual Array image #storsimple #azure #cloud #backup #msft #MVP

 

The new StorSimple 8000 series hybrid storage arrays are the most powerful StorSimple systems ever and have even tighter integration with Azure, including two new Azure-based capabilities to enable new use cases and centralize data management.

The on-premises StorSimple Virtual Array is available for all customers with an Enterprise Agreement for Microsoft Azure. The StorSimple Virtual Array is a version of the StorSimple solution in virtual machine form, installed on your existing hypervisors. The virtual array builds on the success of previous StorSimple technology, using a hybrid cloud storage approach for on-demand capacity scaling in the cloud and cloud-based data protection and disaster recovery.

The virtual array can be run as a virtual machine on your Hyper-V or VMware ESXi hypervisors and can be configured as a File Server (NAS) or as an iSCSI server. The hybrid approach is to store the most used data (hottest) local on the virtual array and (optionally) tiering older stale data to Azure. The virtual array also provides the ability to back up the data to Azure in addition to having a quick disaster recovery (DR) capability.

Architecture

The Virtual Array is now also available on-premises; let's see how to configure it and how to play with it.


Each virtual array can manage up to 64 TB of data in the cloud. Virtual arrays, in different branch and remote offices across geographies, can be managed from a central StorSimple management portal in Azure.

image

Your StorSimple Manager has been created!

Download on-premises virtual device image

Image for Hyper-V 2008 R2 and above

 

Now that we have the image, we create a VM on my Hyper-V server.

image

You must make sure that the underlying hardware (host system) on which you are creating the virtual device is able to dedicate the following resources to your virtual device:

  • A minimum of 4 cores.
  • At least 8 GB of RAM.
  • One network interface.
  • A 500 GB virtual disk for system data.
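
If you script the VM creation, a minimal Hyper-V sketch along these lines should do; the VM name, paths and switch name are examples only:

# Create a Gen 1 VM for the downloaded virtual array image (names and paths are examples)
New-VM -Name 'StorSimpleVA01' -MemoryStartupBytes 8GB -Generation 1 `
    -VHDPath 'D:\VMs\StorSimpleVA01\StorSimpleVA.vhd' -SwitchName 'External'
Set-VMProcessor -VMName 'StorSimpleVA01' -Count 4
# Add the 500 GB data disk for system data
New-VHD -Path 'D:\VMs\StorSimpleVA01\Data.vhdx' -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName 'StorSimpleVA01' -Path 'D:\VMs\StorSimpleVA01\Data.vhdx'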

image

Logon with the default password

image

The auto config is shown and to manage the device go to the local IP

image

in this case https://YourIP

image

Now that we are connected to the device we need to configure the device with the 5 steps.

image

The on-premises device needs to be registered in the Azure portal. The Azure portal shows the registration ID, and this ID needs to be copied into the device.

image

In the local web browser you can copy the registration ID

image

To get the other key, go to the devices in Azure; at the bottom is the second key, if this is your first device in this subscription.

 

image

 

image

After entering the keys and registering the device, it reboots, and you have your own StorSimple.

If this is the first device that you are registering with your StorSimple Manager service, a Service data encryption key will appear. Copy this key and save it in a safe location. This key will be required with the service registration key to register additional devices with the StorSimple Manager service. If this is not the first device that you are registering with this service, then you will need to provide the service data encryption key (that you saved during the registration of the first device).

 

image

 

My device is configured and domain joined

image

Going to the Azure portal you can see the on premise device.

image

With just a few more steps we have the appliance ready for use, just drill in to the device and the two steps are there to guide you.

 

image

 


Specify a storage account to be used with your device. You can select an existing storage account in this subscription from the dropdown list or specify Add more to choose an account from a different subscription.
Define the encryption settings for all the data that will be sent to the cloud. To encrypt your data, check the combo box to enable cloud storage encryption key.

Enter a cloud storage encryption key that contains 32 characters. Keep in mind that if you lose this key, there is no way to access this backup again; not even Microsoft is going to fix that!
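
If you want a quick way to generate such a key, a small sketch (store the result somewhere safe):

# Generate a random 32-character key from a fixed alphabet
$chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'.ToCharArray()
-join (1..32 | ForEach-Object { Get-Random -InputObject $chars })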

image

image

 

The next step is add a share to the device

 

image

 

Select a usage type for the share.

The usage type can be Tiered or Locally pinned, with tiered being the default. For workloads that require local guarantees, low latencies, and higher performance, select a Locally pinned share. For all other data, select a Tiered share.

A locally pinned share is thickly provisioned and ensures that the primary data on the share stays local to the device and does not spill to the cloud.

A tiered share on the other hand is thinly provisioned and can be created very quickly. When you create a tiered share, 10% of the space is provisioned on the local tier and 90% of the space is provisioned in the cloud. For instance, if you provisioned a 1 TB volume, 100 GB would reside in the local space and 900 GB would be used in the cloud when the data tiers.

This in turn implies that if you run out of all the local space on the device, you cannot provision a tiered share.

Specify the provisioned capacity for your share. Note that the specified capacity should be smaller than the available capacity. If using a tiered share, the share size should be between 500 GB and 20 TB. For a locally pinned share, specify a share size between 50 GB and 2 TB. Use the available capacity as a guide to provision a share. If the available local capacity is 0 GB, then you will not be allowed to provision local or tiered shares.


During this creation I had some errors, so I created a second device with more storage. ;-)

The thing was, the disk would not come online. I did some testing and playing, and in the end I got tons of "what if" ideas, but for this one… #fubar.

image

So I created several shares on the Device

 


 

Testing the shares in my domain and yes it is working.

 


A quick overview of my shares from the file server. You can also build your StorSimple iSCSI device.

 

image

A quick overview of the two storsimple devices

image

In the Azure StorSimple Maintenance tab you can scan the device for a software update; this comes in two phases: downloading and installing.

Update downloading


Now that the updates are downloaded we can update the device


If anything goes wrong you can access the diagnostic logs from the local device

 


all windows and storsimple logs are there in just one zip file.

image

I think we can create new options to get the most out of Azure. Suppose I add this to Azure Pack #wapack, or to Azure Stack #mas.

Stay tuned, I'll show you more as the Azure playground gets better and more Azure credits are spent in this environment.

I'll do some troubleshooting and performance testing in the next blogs, stay tuned.

Greetings,

Robert Smit

Cloud and Datacenter MVP (Expertise: High Availability)

Posted January 11, 2016 by Robert Smit [MVP] in StorSimple


Update hyper-converged in Microsoft Azure Performance testing

This is an update on the previous blog post, Using Windows Storage Spaces direct with hyper converged in Microsoft Azure with Windows Server 2016.

There I only tested read performance, not write.

image

The disks I created all have host caching disabled, so we need to change this for the 80 disks in the VMs, as currently I get a maximum of around 10K write IOPS.

It turns out there is a limit: the cache can only be set on 4 disks per VM!
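
Changing the host cache setting on an existing data disk can be done with the classic Azure PowerShell cmdlets; a minimal sketch (the LUN and $vmname values are examples):

# Enable read/write host caching on the data disk at LUN 0
Get-AzureVM -ServiceName $vmname -Name $vmname |
    Set-AzureDataDisk -LUN 0 -HostCaching ReadWrite |
    Update-AzureVM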

image

 

image

 

With this I set 4 disks with cache on each of the 5 nodes.

image

But after some testing the results are basically the same

But the write latencies are way too high to get optimal results; that is with standard disks and with read/write cache.

But it all depends on which test I run, how deep the test goes, and the block size.


After lots of runs I got great read results, but not much more than 15K write IOPS. Only on the local D: drive (SSD) did I get 35K write IOPS. ;-)

Conclusion: when building Storage Spaces and you need not only fast reads but also fast writes, you had better create different pools; and when using Azure, use the local disk for writes or use Premium disks. Currently my Azure credits are gone, but my next test will be the same configuration with a Premium SSD disk.

Posted January 9, 2016 by Robert Smit [MVP] in Azure


Using Windows Storage Spaces direct with hyper converged in Microsoft Azure with Windows Server 2016

Sometimes you need fast machines and a lot of IOPS. SSD is the way to go, but what if your site is in Azure?

Well, build a high-performance storage space in Azure. Remember, this setup will cost you some money, or will burn your MSDN credits in just one run.

My setup uses several storage accounts and a 5-node cluster with a cloud witness; each node has 16 disks.

image

As the setup is based on Storage Spaces Direct, I build a 5-node cluster. Some options are not needed, but I need them for my demo, in case you wondered why I installed this or that.

So building the Cluster

Get-WindowsFeature Failover-Clustering
Install-WindowsFeature "Failover-Clustering","RSAT-Clustering","File-Services" -IncludeAllSubFeature -ComputerName "rsmowanode01.AZUTFS.local"

I add the other nodes later.

#Create cluster validation report
Test-Cluster -Node "rsmowanode01.AZUTFS.local"
New-Cluster -Name Owadays01 -Node "rsmowanode01.AZUTFS.local" -NoStorage -StaticAddress "10.0.0.20"
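
The remaining nodes can be joined afterwards; a rough sketch, assuming the node names follow the same pattern:

# Add the remaining nodes to the cluster
2..5 | ForEach-Object {
    Add-ClusterNode -Cluster "Owadays01" -Name ("rsmowanode{0:D2}.AZUTFS.local" -f $_)
}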

Now that my cluster is ready, I added some disks to the VMs and placed them in several storage accounts (you can expand the default limit, just file an Azure support request).

I currently have a whole list of storage accounts; not all are needed, but you never know.

As I prep all my Azure VMs in PowerShell, here is an example of how to add the disks to an Azure VM. As I need 16 disks for each of the 5 nodes, that is 80 disks of 500 GB each, 40 TB of raw disk.

The PowerShell sample command to create the disks:

Get-AzureVM -Name $vmname -ServiceName $vmname |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 500 -DiskLabel 'datadisk0' -LUN 0 -HostCaching None |
    Update-AzureVM
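
Rather than repeating that 16 times per VM, a small loop along these lines works too (same parameters as above, a sketch):

# Add all 16 data disks of 500 GB to one VM, one per LUN
foreach ($lun in 0..15) {
    Get-AzureVM -Name $vmname -ServiceName $vmname |
        Add-AzureDataDisk -CreateNew -DiskSizeInGB 500 -DiskLabel "datadisk$lun" -LUN $lun -HostCaching None |
        Update-AzureVM
}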

 

 

 

 

 

 

 

 

 

Now that the cluster is ready and the disks are attached to the Azure VMs, it is time for some magic.

With the following command:

Get-Disk | Where-Object FriendlyName -eq 'Msft Virtual Disk' | Initialize-Disk -PartitionStyle GPT -PassThru

All disks are online. I do not need to format them, as the disks are getting pooled.

image

Every node gets its own storage enclosure.

To enable the Storage Spaces Direct option, you use the S2DEnabled cluster property:

(Get-Cluster).S2DEnabled

What this does is turn the local disks into usable cluster disks.

image

 

To create a basic storage pool:

New-StoragePool  -StorageSubSystemName Owadays01.AZUTFS.local -FriendlyName OwadaysSP01 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem  -friendlyname "Clustered Windows Storage on Owadays01" | Get-PhysicalDisk)

image

| Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "IODisk" -AllocationUnitSize 65536 -Confirm:$false
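
The first part of that pipeline was cut off above; a complete sketch of carving a disk out of the pool and formatting it could look like this (the friendly name and size are examples):

# Create a mirrored virtual disk from the pool, then initialize, partition and format it
New-VirtualDisk -StoragePoolFriendlyName OwadaysSP01 -FriendlyName IODisk01 `
    -ResiliencySettingName Mirror -ProvisioningType Fixed -Size 500GB |
    Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "IODisk" -AllocationUnitSize 65536 -Confirm:$false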

 

image

#Query the number of disk devices available for the storage pool
(Get-StorageSubSystem  -Name Owadays01.AZUTFS.local | Get-PhysicalDisk).Count

image

 

Mirror storage spaces

Mirroring refers to creating two or more copies of data and storing them in separate places, so that if one copy gets lost the other is still available. Mirror spaces use this concept to become resilient to one or two disk failures, depending on the configuration.

Take, for example, a two-column two-way mirror space. Mirror spaces add a layer of data copies below the stripe, which means that one column, two-way mirror space duplicates each individual column’s data onto two disks.

Assume 512 KB of data are written to the storage space. For the first stripe of data in this example (A1), Storage Spaces writes 256 KB of data to the first column, which is written in duplicate to the first two disks. For the second stripe of data (A2), Storage Spaces writes 256 KB of data to the second column, which is written in duplicate to the next two disks. The column-to-disk correlation of a two-way mirror is 1:2; for a three-way mirror, the correlation is 1:3.

Reads on mirror spaces are very fast, since the mirror not only benefits from the stripe, but also from having 2 copies of data. The requested data can be read from either set of disks. If disks 1 and 3 are busy servicing another request, the needed data can be read from disks 2 and 4.

Mirrors, while being fast on reads and resilient to a single disk failure (in a two-way mirror), have to complete two write operations for every bit of data that is written. One write occurs for the original data and a second to the other side of the mirror (disk 2 and 4 in the above example). In other words, a two-way mirror requires 2 TB of physical storage for 1 TB of usable capacity, since two data copies are stored. In a three-way mirror, two copies of the original data are kept, thus making the storage space resilient to two disk failures, but only yielding one third of the total physical capacity as useable storage capacity. If a disk fails, the storage space remains online but with reduced or eliminated resiliency. If a new physical disk is added or a hot-spare is present, the mirror regenerates its resiliency.

Note: Your storage account is limited to a total request rate of up to 20,000 IOPs. You can add up to 100 storage accounts to your Azure subscription. A storage account design that is very application- or workload-centric is highly recommended. In other words, as a best practice, you probably don’t want to mix a large number of data disks for storage-intensive applications within the same storage account. Note that the performance profile for a single data disk is 500 IOPs. Consider this when designing your overall storage layout.

https://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storage-limits

image

Now that the storage pools are in place, we can do some measurements on the speed of creating disks and on IOPS, based on ReFS and NTFS.

These are the disks I am using for the Scale-Out File Server.

New-Volume -StoragePoolFriendlyName OWASP1 -FriendlyName OWADiskREFS14 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_REFS -Size 2000GB

image

New-Volume -StoragePoolFriendlyName OWASP1 -FriendlyName OWADiskNTFS15 -PhysicalDiskRedundancy 1 -FileSystem NTFS -Size 20GB

image

image

With some disk changes and creation tests, you can say that ReFS with Cluster Shared Volumes is about 100x as fast!

 

Now that we have cluster storage, I am using it for the SOFS.

#create the SOFS 
New-StorageFileServer -StorageSubSystemName Tech-SOFS.AZUTFS.local -FriendlyName Tech-SOFS -HostName Tech-SOFS -Protocols SMB

 

image

Adding the disk and the next test is ready.

 

First we create a couple of disks on the ReFS share.

image

image

So a 1 TB disk creation is not much slower than a 100 GB file; remember, these are fixed-size files.

When I do this on the NTFS volume and create a 100 GB fixed disk, it takes forever; after 10 minutes I stopped the command. This is why you always do a quick format on an NTFS disk.
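
For reference, the kind of commands used for these creation tests; the file names are examples, the share paths are the ones used further down:

# Time the creation of fixed-size VHDX files on the ReFS and NTFS shares
Measure-Command { New-VHD -Path '\\tech-sofs\Tech-REFS01\test100GB.vhdx' -SizeBytes 100GB -Fixed }
Measure-Command { New-VHD -Path '\\tech-sofs\Tech-NTFS01\test1GB.vhdx' -SizeBytes 1GB -Fixed }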

image

A 1 GB disk creation is a better test; as you can see, this is around 8 times slower with a 1000x smaller disk.

image

 

Let's test IOPS. For this I use the DiskSpd tool: Diskspd Utility: A Robust Storage Testing Tool (superseding SQLIO).

https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223

 

So the disk creation is way, way faster, and when using this in a Hyper-V deployment the VM creation is way faster, and so is copying files.

 


I only did READ tests! If you also want a write test, use the -w parameter (the write percentage); -b is the block size.

Testing on REFS

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b64K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat

C:\run\diskspd.exe -c10G -d10 -r -w0 -t8 -o8 -b1024K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat

image

When using a short 10-second burst we get high rates, but that is not the goal.

C:\run\diskspd.exe -c10G -d10 -r -w0 -t8 -o8 -b1024K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat

image

Testing On NTFS

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b64K -h -L \\tech-sofs\Tech-NTFS01\testfil1e.dat

image

 

image

 

So basically you get much more IOPS than from a normal single disk, but it all depends on the block size, the configuration, and whether you use standard or Premium storage.

The main thing is: if you want fast IOPS and fast machines, it can be done in Azure. It will cost you, but it is also expensive on-premises.

C:\run\diskspd.exe -c10G -d100 -r -w0 -t8 -o8 -b4K -h -L \\tech-sofs\Tech-REFS01\testfil1e.dat

and with several runs you can get some nice results

image

But the config I used costs around $30 in total per hour.

A8 and A9 virtual machines feature Intel® Xeon® E5 processors. Adds a 32 Gbit/s InfiniBand network with remote direct memory access (RDMA) technology. Ideal for Message Passing Interface (MPI) applications, high-performance clusters, modeling and simulations, video encoding, and other compute or network intensive scenarios.

A8-A11 sizes are faster than D-series

https://azure.microsoft.com/en-us/pricing/details/virtual-machines/

 

Greetings,

Robert Smit

Cloud and Datacenter MVP (Expertise: High Availability)

Posted January 5, 2016 by Robert Smit [MVP] in Windows Server 2016

