Migrate VHD Disks to Azure Disks – Direct-upload to Azure managed disks #Azure #Upload #Disk #Migrate #VHD #storage #MVPBuzz #WIMVP

When I saw this new option I thought it could be interesting: prep some disks in advance and upload the disk later. It looks quicker than staging the VHD first. There are two ways you can bring an on-premises VHD to Azure as a managed disk:

  1. Stage the VHD into a storage account before converting it into a managed disk. 
  2. Attach an empty managed disk to a virtual machine and copy the data over.

Both of these approaches have drawbacks. The first option requires an extra storage account to manage, while the second has the extra cost of a running virtual machine. Direct-upload addresses both issues and provides a simplified workflow by letting you copy an on-premises VHD directly into a managed disk. You can use it to upload to Standard HDD, Standard SSD, and Premium SSD managed disks of all supported sizes. With this new option a migration could speed up, and it is less work.

Nowadays Microsoft wants you to do a lot in the Azure CLI. Personally I like the Azure CLI for quick things, but for testing and building I prefer the PowerShell options. So in this blog post I show you how to upload your VHD to a managed Azure disk.

Starting this I ran into some PowerShell weirdness: I did not have the proper options. It turned out I was running an older version of the Azure Az module.

So when running new Azure options with PowerShell, make sure you run the latest version. This is not needed for the Azure CLI.

I had version 2.7.0 running and I needed 2.8.0, so first uninstall the old version:

Uninstall-AllModules -TargetModule Az -Version 2.7.0 -Force

Or, if you have a lot of old versions installed, uninstall them all:

# List all installed Az versions and remove everything except the newest one
$versions = (Get-InstalledModule Az -AllVersions | Select-Object Version)
$versions[0..($versions.Length-2)] | foreach { Uninstall-AllModules -TargetModule Az -Version ($_.Version) -Force }
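
With the old versions gone, pull in the current module from the PowerShell Gallery. A minimal sketch (the version you get is simply whatever is latest at that moment):

# Install the newest Az module and check what is now installed
Install-Module -Name Az -Repository PSGallery -AllowClobber -Force
Get-InstalledModule -Name Az | Select-Object Name, Version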

 


And of course you can do this in the Azure CLI with the following command:

az disk create -n mydiskname1 -g disk1 -l westeurope --for-upload --upload-size-bytes 10737418752 --sku standard_lrs
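
The rest of this post does the remaining steps in PowerShell, but for completeness here is a hedged CLI sketch of the follow-up step, granting write access to the empty disk so you can upload into it (the names match the create command above; the SAS URL comes back in the output):

az disk grant-access -n mydiskname1 -g disk1 --access-level Write --duration-in-seconds 86400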
 

 

But where is the fun in doing it that way, right?

Creating a managed disk in the GUI takes only a few steps, but then you need to attach it to a virtual machine and copy over the data. Time consuming.


 

Let's create a PowerShell script that picks the right disk size and uploads the VHD to Azure as a managed disk.

First we need to check the size of the VHD file, to make sure the managed disk gets enough space.

$vhdSizeBytes = (Get-Item "I:\Hyperv-old\MVPMGTDC01\mvpdc0120161023143512.vhd").length


So I need a disk size of 136367309312 bytes.

Our next step is to create a proper disk configuration, with placement in the correct region and resource group.

 

#Provide the Azure region where the Managed Disk will be located.
$Location = "westeurope"

#Provide the name of the resource group where the Managed Disk will be created.
$ResourceGroupName = "rsguploaddisk001"

#Provide the name of the Managed Disk.
$DiskName = "mvpdc01-Disk01"

New-AzResourceGroup -Name $ResourceGroupName -Location $Location

$diskconfig = New-AzDiskConfig -SkuName 'Standard_LRS' -OsType 'Windows' -UploadSizeInBytes $vhdSizeBytes -Location $Location -CreateOption 'Upload'

$diskconfig


 

Now that the configuration is set we can actually create the new disk.

New-AzDisk -ResourceGroupName $ResourceGroupName  -DiskName $DiskName -Disk $diskconfig


Now that the disk is created we can also see it in the Azure portal.

 


The details of the just created disk.


Looking at the disk configuration, the disk is still empty and the disk state is ReadyToUpload.


At this point we don't have access to the disk, so we can't upload the original VHD to the Azure managed disk yet. Therefore we need to grant access to this disk. Access is granted for a time window, for example 24 hours or shorter; it depends on how long the upload needs.

The basic default is 24 hours = 86400 seconds, and when the upload is done we revoke the access.


$diskSas = Grant-AzDiskAccess -ResourceGroupName $ResourceGroupName -DiskName $DiskName -DurationInSecond 86400 -Access 'Write'


And in the portal you can see the ReadyToUpload status has changed to ActiveUpload.


Looking at the details of the disk in PowerShell, we see the same ActiveUpload disk state.

$disk = Get-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName

$disk


Our next step is to copy the VHD to the Azure disk:

AzCopy.exe copy "I:\Hyperv-old\MVPMGTDC01\mvpdc0120161023143512.vhd" $diskSas.AccessSAS --blob-type PageBlob

As I did not place any restrictions on the upload, it will use my full Internet bandwidth, in my case a full 1 Gbps connection.
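
If you do not want the upload to saturate your line, AzCopy v10 can cap its bandwidth. A hedged sketch, where the 400 Mbps value is just an example:

AzCopy.exe copy "I:\Hyperv-old\MVPMGTDC01\mvpdc0120161023143512.vhd" $diskSas.AccessSAS --blob-type PageBlob --cap-mbps 400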

 


Now that the upload is completed we can revoke the access:

Revoke-AzDiskAccess -ResourceGroupName $ResourceGroupName -DiskName $DiskName


As you can see the disk state is now Unattached and we can create a VM with this disk.


The Disk type can’t be changed at this point but can be changed when the VM is deployed.


The machine is quickly built, and depending on the machine type you can change the disk type to SSD.


 

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile

Posted October 18, 2019 by Robert Smit [MVP] in Azure, Windows Server 2019


Starting with Azure NetApp Files – is it better than Storage Spaces Direct in Azure? #Azure #NetApp #storagespaces #S2D #diskspd #WVD #Cloud #MVPBuzz #WIMVP

Recently I did a blog post on Azure VM size limits and the disk performance you get. Picking the right VM in Azure is important when you use Azure Storage SSD disks, because the machines themselves are limited in throughput. With a D64s_V3 and an ultra SSD disk I got 81544 IOPS; that's good but costly, as the VM is around $5K per month and the disk adds some $$ on top.

This post is not about costs or good versus bad, but it will show you that picking random resources can cost you more than a selective choice, and that the selective choice can even give better performance at a lower cost.

This blog post is just a reference; measurements in your configuration may be different, so read the blog comments.

https://robertsmit.wordpress.com/2019/07/09/azure-vm-vs-disk-vs-costs-does-size-matter-or-a-higher-price-for-better-specifications-azure-storage-performance/


Below is the biggest IOPS number I have seen in Azure with a DiskSpd test, and that is not bad at all; the cost is not even worse than the D64, as this was done with an H16r at $1,472 per month.

###########################################

Good comments were made below in the blog comments: the measurements could point to caching. Also, always build a solution that is supported by the vendor.

##################################

Getting massive IOPS* in a VM is still not plug and play. (with Caching)

Azure NetApp Files 1200K IOPS

In my former blog post I tested some VMs to get the maximum speed; selecting the right Azure VM can save some costs.


When using VMs with disks attached, the storage throughput is important; in an Azure VM cluster with Storage Spaces Direct the network is also important. With SMB storage the VM is just a VM, but the setup of that VM matters more: what is the network bandwidth, and can it do #RDMA? A 1 Gbps NIC can do roughly 95 MB/s; that's not bad, but we want to do more, right?

But what if we use other storage, will the result be the same or different? To give a good answer I'm running the same test as in that blog post:

https://robertsmit.wordpress.com/2019/07/09/azure-vm-vs-disk-vs-costs-does-size-matter-or-a-higher-price-for-better-specifications-azure-storage-performance/


There I got about 80,000 IOPS; not bad, but at what cost? Storage is cheap in Azure but performance costs a lot, and then there is the latency of the disks. It is all part of your solution.

https://azure.microsoft.com/en-us/services/netapp/

In this case I'm using the new Azure NetApp Files. It offers NFS or SMB, and the pricing is different from Azure disks. Cheaper or more expensive, it all depends on your solution and the performance you need, but your solution may need a different setup, as this is an SMB solution and not a directly attached disk.


A comparison between Azure Files, Azure NetApp Files and Azure Disks:


Big numbers, but they come with a big BUT: 320K IOPS means a 500 TB pool, and at $0.39 per GB that is roughly $200,000 a month, so handle with care. Still, it is a lot of IOPS.


Azure NetApp Files comes in three service levels: Standard, Premium and Ultra. Seems fast.


Well, let's test this and see where the difference between Premium and Ultra is. We add Azure NetApp Files to our Azure subscription.


For starters, the subscription needs to be whitelisted before we can use this. Then we create a NetApp account, and after that we create some capacity pools and volumes.
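
The same can be scripted with the Az.NetAppFiles module. A minimal sketch, assuming the subscription is already whitelisted; the resource group and account names are examples, not my real setup:

# Register the NetApp resource provider once per subscription
Register-AzResourceProvider -ProviderNamespace Microsoft.NetApp

# Create the NetApp account that will hold the capacity pools and volumes
New-AzNetAppFilesAccount -ResourceGroupName "rsg-anf-001" -Location "westeurope" -Name "anfaccount01"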


Now we have a virtual NetApp storage device, and we can create pools and volumes to work with.


For SMB connections we need to join Active Directory first, as we need to create a file server name, similar to a SOFS server. This is created in your own Active Directory and not in Azure AD; the file server computer accounts will be created in your AD.


With these settings we have a configuration similar to an on-premises setup with RDMA, Hyper-V, S2D or Azure Stack HCI, and with that in mind we can aim for big performance numbers.

You need a user account that can create computer objects in the OU. The LDAP path is given without the DC= part, and my OU is in the root, so the syntax is short.


Here you can see the just-created link to the domain.


Our next step is creating a capacity pool. These can be large, but remember: you pay for what you provision!


Remember you will pay for the provisioned capacity, and here is the catch: the larger the pool, the more IOPS you get. So if you have a tiny application that needs performance, you need to check what the right choice is.

Now that the pool is created we can create a volume in it; the basics are similar to Microsoft storage pools.

Create a volume of 100 GB in the 4 TB pool; the cost is for the 4 TB, not the 100 GB.
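
Scripted, this looks roughly like the sketch below (Az.NetAppFiles cmdlets; the names, sizes and Premium service level are examples, and $delegatedSubnetId comes from the dedicated storage network created in the next step):

# 4 TiB capacity pool, PoolSize is in bytes
New-AzNetAppFilesPool -ResourceGroupName "rsg-anf-001" -Location "westeurope" -AccountName "anfaccount01" -Name "pool01" -PoolSize 4398046511104 -ServiceLevel "Premium"

# 100 GiB SMB volume in that pool, UsageThreshold is in bytes
New-AzNetAppFilesVolume -ResourceGroupName "rsg-anf-001" -Location "westeurope" -AccountName "anfaccount01" -PoolName "pool01" -Name "vol01" -CreationToken "vol01" -UsageThreshold 107374182400 -ServiceLevel "Premium" -ProtocolType "CIFS" -SubnetId $delegatedSubnetId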


I create a new storage network; a dedicated, delegated subnet for the NetApp Files volumes is a must. If you are familiar with Storage Spaces Direct and SMB Multichannel, then you have a new playground here, but with less building.
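
A hedged sketch of that dedicated network: Azure NetApp Files needs a subnet delegated to Microsoft.NetApp/volumes, and the volume is then created against that subnet ID (address ranges and names are examples):

$delegation = New-AzDelegation -Name "anfDelegation" -ServiceName "Microsoft.NetApp/volumes"
$subnet = New-AzVirtualNetworkSubnetConfig -Name "anf-subnet" -AddressPrefix "10.0.2.0/28" -Delegation $delegation
$vnet = New-AzVirtualNetwork -Name "anf-vnet" -ResourceGroupName "rsg-anf-001" -Location "westeurope" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
$delegatedSubnetId = $vnet.Subnets[0].Id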


The network is done, the file server is created and the share is created; at this time no ACLs are available on the share.


When the setup is done we see a file server object in the AD structure. NetApp uses the given name but adds a –XXXX suffix to make the name unique; that makes sense, because otherwise a deployment with a duplicate name would fail.


My first test is on the 100 GB share, from an Azure D2s_v3 VM that is capable of delivering 4000 IOPS on its disks.
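
For reference, the numbers in these screenshots come from DiskSpd. The exact parameters of my runs are not in this post, so the line below is only a sketch of the kind of command used (a 10 GiB test file on the mapped ANF share, 60 seconds, random 4K I/O, 30% writes, caching disabled, latency captured):

.\diskspd.exe -c10G -d60 -r -w30 -t4 -o32 -b4K -Sh -L Z:\diskspd-test.dat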


That is impressive: almost 29,000 IOPS on a simple VM. Well, my volume sits in a pool of just 4 TB, so roughly 4x 4000 IOPS; not bad.


So a bigger machine is not needed then? Well, now the VM selection should not be based on disk size but on network throughput, and more network bandwidth is needed to max out the storage, as the D machine has roughly a 1 Gbps NIC and that is too low for this.


I'm impressed by the low latency numbers and how easily you get this performance.


Hitting the network adapter limit.


29K IOPS on a cheap Azure D2 VM that’s not bad at all.

Good performance, but I want more. Let's create some bigger pools and machines with better networking. I'm thinking about SMB Direct, or some big pool, to really hit the storage hard and get big numbers, like the demos of Jeff Woolsey @WSV_GUY.


Looking into the resource group you can see the created resources, including two NICs; those are there because I created a Premium and an Ultra pool.

I can imagine that this NIC will become a bottleneck in a big pool, with all the data hitting one NIC.


Let me start an H8 Azure VM; this is a newer VM with better network performance. Should be good, and it is priced below $1000 monthly.


Azure NetApp Files with a Premium 16 TB pool


Low latency and good performance; with Azure disks I needed more disks and a bigger VM to get these results.


The same test with the Ultra 16 TB storage pool:


No big changes between Premium and Ultra, and the cost difference is $0.10 more per GB.

So a new test, this time with an H16r; its NIC supports RDMA, so we can optimize the network performance a bit.


https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-hpc#h-series


Well, that's better than good: big performance. But I must say this is not the default H16r machine; I tweaked it a bit and used the same techniques as when you build Azure Stack HCI or Hyper-V over SMB*

The measured results could be partly cached; to be sure I should do multiple runs, but all the tests were done only once, as these setups are expensive and testing takes time. I have no stake in any result, good or bad; it is just my setup and my opinion.


I was too late to grab a decent CPU and NIC performance screenshot.

I was planning to run the same tests on an A9 VM for my Windows Virtual Desktop with FSLogix profiles (user profile disks), but compared to an A machine, an H or N series such as an NC24r or ND40 costs some $ while giving huge performance. That is something for a next blog post.


But the initial tests were not better and the A9 VM is more expensive, which gives me thoughts on different options: what and how can I extend the Azure networks to get the ultimate sweet spot of cost versus performance?

I think this is a really good replacement for S2D in Azure; it is cheaper and faster. Suppose you run an RDS site with lots of user profile disks, or Windows Virtual Desktop with FSLogix: making the profiles highly available takes you to a Storage Spaces Direct cluster with at least two nodes and six disks. Replacing that with Azure NetApp Files could save you some $$, and sure, lots of options are there in the cloud. Yes, you may need an extra network, but if you can offload your traffic with some extra network adapters and have a dedicated storage network to boost the performance, then you have a great solution in the cloud.

Read the comments below, and always test your setup to see the results before you go into production. Also read the requirements so that your config is supported.

I'm saving my Azure credits to do a massive S2D run with the Azure ultra disks. Keep in mind all my configs are different, nothing here is next-next-finish, and it may not be supported.

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile

 

Posted August 1, 2019 by Robert Smit [MVP] in Azure


Happy SYSTEM ADMINISTRATOR — APPRECIATION DAY – 20th Annual #SysAdminDay #Sysadmin #MicrosoftMVP #MVPBuzz #WIMVP

There are a lot of things you can say but still I think this song makes a good point. (youtube link)


 


 

 

 

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile

Posted July 26, 2019 by Robert Smit [MVP] in sysadminday

Azure Security Center: How to Protect Your Datacenter with Next Generation Security

Join this Free Webinar With Thomas Maurer and Andy Syrewicze.

Azure Security Center: How to Protect Your Datacenter with Next Generation Security

Security is a major concern for IT admins and if you’re responsible for important workloads hosted in Azure, you need to know your security is as tight as possible. In this free webinar, presented by Thomas Maurer, Senior Cloud Advocate on the Microsoft Azure Engineering Team, and Microsoft MVP Andy Syrewicze, you will learn how to use Azure Security Center to ensure your cloud environment is fully protected.

The webinar covers:

  • Azure Security Center introductions
  • Deployment and first steps
  • Best practices
  • Integration with other tools
  • And more!

Being an Altaro-hosted webinar, expect it to be packed full of actionable information presented via live demos, so you can see the theory put into practice before your eyes. Altaro also puts a heavy emphasis on interactivity, encouraging questions from attendees and using engaging polls to get instant feedback during the session. To ensure as many people as possible have this opportunity, Altaro presents the webinar live twice, so pick the best time for you and don't be afraid to ask as many questions as you like!

There are certain topics in the IT administration world which are optional, but security is not one of them. Ensuring your security knowledge is ahead of the curve is an absolute necessity and becoming increasingly important, as we are all exposed to more and more online threats every day. If you are responsible for important workloads hosted in Azure, this webinar is a must.

Webinar: Azure Security Center:

How to Protect Your Datacenter with Next Generation Security

Date: Tuesday, 30th July

Time: Webinar presented live twice on the day. Choose your preferred time:

● 2pm CEST / 5am PDT / 8am EDT

● 7pm CEST / 10am PDT / 1pm EDT

Save your seat

 

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile

Posted July 25, 2019 by Robert Smit [MVP] in Altaro


It's almost there: SysAdminDay – System Administrator Appreciation Day, July 26, 2019 – 20th Annual. Your chance to WIN BIG with #altaro #SySAdminDay @AltaroSoftware

Your network is secure, your computer is up and running, and your printer is jam-free. Why? Because you’ve got an awesome sysadmin (or maybe a whole IT department) keeping your business up and running.

Show your appreciation

Friday, July 26, 2019, is the 20th annual System Administrator Appreciation Day. On this special international day, give your System Administrator something that shows that you truly appreciate their hard work and dedication.

Source: https://sysadminday.com/

 

At this point vendors are giving some nice swag away. Take a peek at Altaro: they are giving you the option to test some software and a chance to win great prizes.

How to enter the contest & WIN

  1. Download Altaro VM Backup by filling in the form above
  2. Install Altaro Backup (takes < 15 mins)
  3. Win a guaranteed €20 Amazon voucher

To get some prizes go to https://www.altaro.com/sysadmin-day/


Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile

Posted July 24, 2019 by Robert Smit [MVP] in Altaro


Azure VM vs Disk vs Costs – Does size matter? Or a higher price for better specifications? #Azure #Storage #Performance

Building in Azure is easy: the wizard takes you through all the steps and you have a working VM. Choosing the right size is a different story; often it is linked to the on-premises world: a 4-core CPU, 8 GB memory, and "I need 1 TB of disk space". All simple, but then things get complicated: the performance needs to be better, the CPU is fine, memory is at 60%, plenty of disk space. Bigger VM, perfect.

Still slow. The whole VM runs at 20-60%, users are complaining, it must be this Azure thing: someone else's computer running slow.

I often hear this. But is it really slow, or is your measurement wrong?

When you pick a machine on-premises, what do you go for: performance or cost? Performance first, then cost, and in the end you settle on a balance of cost versus performance.

But in Azure, what do you go for: performance or cost? Usually 100% cost, because VMs are expensive. That is not always wrong, but sometimes paying a bit more is the better approach.

In my sample here I show you the performance of a disk with different machine types; not picking the right components does not give you the right performance. It may still work fine for your workload, but then you may be paying too much for an oversized configuration.

In my sample I need a VM with 300 IOPS, one with 4000 IOPS, and one with 27000 IOPS. CPU and memory are not important in this case, as the workload is I/O intensive.

I pick a default Azure VM, a D machine, and attach some disks to it: an HDD-S30, SSD-E30, SSD-P30 and SSD-P60.

 

VM Type                                   Disk Type   MiB/s   I/O per s
Standard D2s v3 (2 vcpus, 8 GiB memory)   HDD-S30      2.01     514.23
                                          SSD-E30      2.21     566.27
                                          SSD-P30     13.29    3403.51
                                          SSD-P60     12.33    3157.46

 

First goal met: 500 IOPS on a cheap machine, and this could even be an Azure B-type VM, which is much cheaper. Then I wonder why use SSD over HDD, since for these IOPS it is the same speed and latency. There is a point: SSDs deliver steady performance, but for a normal workload it comes down to cost. If you have a lot of transactions then SSD may even be cheaper; the fact is nobody really knows how expensive the HDD disks are. Have you ever calculated the storage transactions?


Below is an overview of the disk latency (DiskSpd 25th percentile, in milliseconds):

25th |    100.325 |    N/A |    100.325 HDD-S30

25th |    100.012 |   N/A |    100.012 SSD-E30

25th |      4.545 |    N/A |      4.545   SSD-P30

Comparing all the SSD disks and picking the right performance is not hard; Microsoft did a great job explaining this on Microsoft Docs.

Premium SSD sizes     P30                P40                P50                P60                P70                P80
Disk size in GiB      1,024              2,048              4,096              8,192              16,384             32,767
IOPS per disk         Up to 5,000        Up to 7,500        Up to 7,500        Up to 16,000       Up to 18,000       Up to 20,000
Throughput per disk   Up to 200 MiB/sec  Up to 250 MiB/sec  Up to 250 MiB/sec  Up to 500 MiB/sec  Up to 750 MiB/sec  Up to 900 MiB/sec

When you provision a premium storage disk, unlike standard storage, you are guaranteed the capacity, IOPS, and throughput of that disk.

That is interesting: on my D2 machine with a P30 I got only ~3400 IOPS, so is this wrong? Not according to the disk; the VM itself can only deliver 3200 IOPS, so with ~3400 IOPS delivered it is perfectly normal.
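
Put differently, the ceiling you actually measure is whichever limit you hit first, the disk's or the VM's. A tiny sketch of that reasoning (the 3200 figure is the documented uncached disk IOPS limit of the D2s_v3):

$diskIops = 5000   # P30 limit
$vmIops   = 3200   # Standard_D2s_v3 uncached disk IOPS limit
[Math]::Min($diskIops, $vmIops)   # 3200, so measuring ~3400 with a bit of cache help is right at the VM cap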


 

The same test again with a better Azure VM and the same disks.

 

VM Type                                    Disk Type   MiB/s   I/O per s
Standard DS3 v3 (4 vcpus, 14 GiB memory)   HDD-S30      2.01     514.01
                                           SSD-E30      2.21     566.63
                                           SSD-P30     21.58    5523.51
                                           SSD-P60     51.00   13056.39

 

The requirements are met: 5500 IOPS from a disk that needs to deliver 5000 IOPS, that's good. But what about the P60 disk? Again a hard cap at the VM maximum of 12800 IOPS.

The latency is not that different; for that you need a different kind of VM.

25th |    100.256 |        N/A |    100.256  HDD-S30

25th |    100.008 |        N/A |    100.008 SSD-E30

25th |      4.416 |        N/A |      4.416 SSD-P30

25th |      2.135 |        N/A |      2.135  SSD-P60

Comparing the Azure VMs selected on IOPS and picking the right machine:


 

Selecting the F4 VM, which can deliver 16000 IOPS according to the sizing sheet in the portal.

VM Type                                Disk Type   MiB/s   I/O per s
Standard F4s (4 vcpus, 8 GiB memory)   HDD-S30      2.01     514.01
                                       SSD-E30      2.21     566.63
                                       SSD-P30     21.58    5523.51
                                       SSD-P60     50.85   13018.46

 

I did not get the 16,000 IOPS; in fact it produced almost the same results as the DS3, at double the cost.

SSD-P60 latency measurement 4k blocks vs 64K blocks

25th |      2.171 |        N/A |      2.171

25th |      3.088 |        N/A |      3.088  <> 64K blocks

So this is strange: a big machine, still not hitting its limits, CPU and memory usage is low. Looks good, but not the performance.


Checking the Microsoft site : https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-compute

You can see different specs there. This means the machine can't deliver the IOPS that the sizing table in the portal suggests it can, and the results show it.

Standard_F4s_v2: 4 vCPUs, 8 GiB memory, 32 GiB temp storage, 8 data disks, cached disk throughput 8000 IOPS / 63 MBps (64 GiB cache), uncached disk throughput 6400 IOPS / 95 MBps, 2 NICs / 1750 Mbps

 

Then let's pick an Azure VM that can deliver the IOPS: an F16. A big VM, costly, but can it deliver? I compared both tables, the one in the Azure portal and the one in the docs.

On the docs side, https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-compute lists:

Standard_F16s_v2: 16 vCPUs, 32 GiB memory, 128 GiB temp storage, 32 data disks, cached disk throughput 32000 IOPS / 255 MBps (256 GiB cache), uncached disk throughput 25600 IOPS / 380 MBps, 4 NICs / 7000 Mbps

 

VM Type                                      Disk Type   MiB/s   I/O per s
Standard F16s v2 (16 vcpus, 32 GiB memory)   HDD-S30      2.01     514.09
                                             SSD-E30      2.21     566.63
                                             SSD-P30     21.60    5529.96
                                             SSD-P60     63.76   16321.29

 

This looks OK now 16000 IOPS.

But what if I build a stripe set from the SSD-P30, SSD-P60, HDD-S30 and SSD-E30, what would the IOPS be? (It's a bad idea to mix different disk types; this is just a sample.)

What if we create a stripe set?
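
Inside the VM this is just Windows Storage Spaces. A minimal sketch of how such a stripe set can be built from the attached data disks (pool and disk names are examples; a simple space has no resiliency, so it is for testing only):

# Pool every disk that is still poolable, then carve a simple (striped) virtual disk out of it
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "TestPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "TestPool" -FriendlyName "Stripe01" -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize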


Worse performance than if I use the SSD-P60 alone. A bad configuration.

 

HDD and SSD


Both disks deliver around 500 IOPS each, and together they produce around 1000 IOPS; that's not bad.

But what happens if I combine all the disks into one storage space? Combining all the disks you have and building one new disk, JBOD style.


Also a bad idea and a waste of resources and money: a P60 disk combined with an S30.

That's it for the little side step, but it keeps me thinking… what if?

Below is a list with similar IOPS performance. Instead of using one SSD-P60 I'll use three of them; on paper I should get 3x 16000 IOPS = 48000 IOPS and 3x 500 MB/s = 1500 MB/s, which is massive, right? A stripe set, a storage space or Storage Spaces Direct? All valid options, but what machine do I need to handle that performance?


I selected three types, an E32, DS5 and DS14, all with big price differences but similar specs.

Size               vCPU   Memory GiB   Temp GiB   Max data disks   Cached IOPS / MBps (cache GiB)   Uncached IOPS / MBps   NICs / Mbps
Standard_E32s_v3    32       256          512            32            64000 / 512 (800)               51200 / 768          8 / 16000
Standard_DS5_v2     16        56          112            64            64000 / 512 (688)               51200 / 768          8 / 12000
Standard_DS14_v2    16       112          224            64            64000 / 512 (576)               51200 / 768          8 / 12000

 

First I build a Storage Pool on the DS5_V2


Nice capacity, good latency and decent performance: around 29000 IOPS out of 3 disks. In a mirror set I lose a disk, so the performance is good, better than I expected. To hit the limits I should add two more disks to this config and see if it can handle the performance.

25th |      2.025 |        N/A |      2.025


I'll run the same test on an E32-8s_v3.

Bigger VM much more performance, higher price.


So overall the cheaper VM can produce the same disk performance, and the machine is $1000 cheaper per month. Again, it depends on what you are doing with the VM.

Now the same configuration with Storage Spaces Direct, just to see if the performance is better. Keep in mind that machine performance can differ a bit between runs, so results in the same range I consider the same performance.

The S2D results on a E32 VM


And even a step higher: an expensive VM with 432 GB memory, with an S2D cluster.

 


So, the same performance when running a storage space or an S2D cluster, and no change based on the machine type; in fact the DS5 machine is slightly better. That saves $2000 per month, if you don't need the CPU and memory of the bigger VM.


So size does matter, but it depends on what size you are looking at. Azure is like Lego, but different: combining the right pieces makes a great solution.

Below I created a table of cost versus performance. I also compared the datasheet in the Azure portal to the docs pages, and I think you should keep this page as a reference: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-memory


This shows you that in complex configurations there is no one-size-fits-all; it comes down to testing and adjusting. Tools may help you, but picking the right VM size and choosing the right storage can take some time. In this post I only compared disks, but what if I chose NetApp Files or other disks like ultra SSDs?

And now I did this config with 3 P60 disks at about $1000 each = $3,121.92 (in the Azure calculator), and it gets me ~30,000 IOPS.

Now on the DS5 machine, a two-way mirror, auto-created.


It nags me that I can't get the maximum out of the VM; there must be something wrong in my configuration. Let's do some quick testing and change VM and disk types.

With 6 SSD-P30 disks I get 27,000 IOPS on the DS5 machine.


When using a stripe set this hits the VM limit of 768 MBps throughput: fewer IOPS but more speed. So configuration is also key for the hardware you use.


Let's tweak the config a bit and see if we can pass 50,000 IOPS and hit the machine limit.


With read cache enabled and 8 P30 disks; that's not bad, right?
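
Host read caching on the data disks is set on the VM, not on the disk itself. A hedged sketch of flipping an existing data disk to ReadOnly caching (VM, resource group and disk names are examples):

$vm = Get-AzVM -ResourceGroupName "rsg-perf-001" -Name "perfvm01"
Set-AzVMDataDisk -VM $vm -Name "datadisk01" -Caching ReadOnly
Update-AzVM -ResourceGroupName "rsg-perf-001" -VM $vm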


The P40 disks have 7500 IOPS each; will this break the record? (6x P40 disk storage space)


First test: the same result, a bit lower, but there is more to get. Testing now with 8 P40 disks.

(8x P40 disk storage space)


(8x P40 disk storage space) Manual configuration.


(8x P40 disk storage space) Manual configuration with 6 columns.
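
The manual part is forcing the column count and interleave instead of letting Storage Spaces pick them. A sketch of what that can look like with the pool from before (6 columns over the 8 P40 disks, 64 KB interleave; values are examples, not necessarily my exact layout):

New-VirtualDisk -StoragePoolFriendlyName "TestPool" -FriendlyName "Stripe6Col" -ResiliencySettingName Simple -NumberOfColumns 6 -Interleave 65536 -UseMaximumSize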


That's not bad; the DS5 hits its limit.

At Microsoft Ignite 2015 Mark Russinovich did a demo where he showed a virtual machine with Premium Storage hitting over 64,000 IOPS. Well, this beats that record, but then again the Azure hardware is much better now, right?

Let's switch to some big Azure VMs.


64 cores; let's see if I can use some of these cores in the S2D config.


OK, it seems I need more cores, or less workload on this machine. But it easily hits the IOPS limit of the machine.


 

Overall it comes down to what you need, and testing it with different configurations, not only on price but also on performance. In the first section I used 3x a P60 disk at a cost of $3,000; an even better result came from 8x P30 disks at a cost of $1,000.

Picking the right configuration can only be done based on testing and building some references for yourself. Azure machines and storage are changing all the time and getting better all the time. It all depends on your workload, but there is no one-size-fits-all!

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile

Posted July 9, 2019 by Robert Smit [MVP] in Azure


Renewed as Microsoft MVP for 2019-2020 Switching to Azure #MVPBuzz #MVPAward #Azure #MicrosoftMVP #WIMVP #windowsinsider

 

I am proud to announce that I was awarded by Microsoft with the Microsoft Most Valuable Professional (MVP) Award for 2019-2020 in the category Microsoft Azure. I also hold an MVP Award in Windows Insider #WIMVP. This is my 11th Microsoft MVP award since 2009, and I couldn't be more excited about this one.

I migrated myself to the cloud; it took me 11 years to get from on-premises to the Azure cloud. I'm still looking forward to seeing the new Azure previews and writing blogs, workshops, etc.

A big thank you to the blog readers and the Twitter @ClusterMVP followers. Thanks!


The first award was in 2009 as Cluster MVP; this was a small group, and it has since merged into Cloud and Datacenter Management.

 


Some Impressions of the MVP status.

Who are MVPs?

Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the “bleeding edge” and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products and solutions, to solve real world problems. MVPs make up a global community of over 4,000 technical experts and community leaders across 90 countries and are driven by their passion, community spirit, and quest for knowledge. Above all and in addition to their amazing technical abilities, MVPs are always willing to help others – that’s what sets them apart.

Source https://mvp.microsoft.com/en-us/Overview

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile

Posted July 3, 2019 by Robert Smit [MVP] in MVP Award

