Part 2 Ultimate Step to Remote Desktop Services HTML5 QuickStart Deployment #RDS #VDI #RDP #RDmi

Ready for Part 2 of the RDS setup. I already wrote a Step by Step Server 2016 Remote Desktop Services QuickStart Deployment #RDS #VDI #RDP #RemoteApp: https://robertsmit.wordpress.com/2015/06/23/step-by-step-server-2016-remote-desktop-services-quickstart-deployment-rds-vdi-rdp-remoteapp/

Then I did Part 1: Ultimate Step to Remote Desktop Services HTML5 on Azure QuickStart Deployment #RDS #S2D #VDI #RDP #RDmi https://robertsmit.wordpress.com/2018/01/15/part1-ultimate-s…s2d-vdi-rdp-rdmi/

That is where I decided to do a blog on how to build my perfect RDS environment. Yes, it always depends, but some components are simply there to use in Azure. I covered all the basics, but currently there are so many options that I thought it was time to build a new reference guide for RDS. Remember, this is my opinion. Good or bad, this works, and yes, you can combine all the roles or split them, use the GUI version, and use the other products as well.

Microsoft Ignite is behind us, and as expected the new RDmi (RDS modern infrastructure) is almost there (see Channel 9: https://channel9.msdn.com/Shows/OEMTV/OEMTV1760). It is a totally new design if you are using the Azure components. It is more of a RemoteApp replacement, but what about on-premises? You can build some interesting configurations, such as the hybrid model of the RDS farm with the Azure File Sync option. I see great possibilities in some configurations, and in the usage of the HTML5 client. On your own build you can have those benefits as well.

Building RDS on-premises is not multi-domain; it all needs to be in one domain. But should you wait if you want RDS? Then you could wait forever, as there is always new exciting technology around the corner.

Just start with RDS and learn. Yes, maybe next year your design will be obsolete, but it will still work. So for now I want to touch on the current RDS build, as I see on my old blog post that a lot of you are building RDS on-premises but also in Azure. To build a maximally scalable solution you will need to separate all the roles.

But in this case I want to use the option to build a future-proof reference for RDS, and yes, this can also be an RS3 or later release (that is Core anyway). I use Core Server where I can, and after the Traffic Manager there is no firewall, but it would make sense to use one of your choice. Do use NSGs for the public networks and/or IPs! https://robertsmit.wordpress.com/2017/09/11/step-by-step-azure-network-security-groups-nsg-security-center-azure-nsg-network/

The basic Remote Desktop Services with HTML5 setup I built in Part 1 is shown below.

image_thumb6

When you don't get the right performance from your RDS host, and you are running it in Azure like me, you can always change the RDS host size. Currently I use all BxMs machines: good for making blog posts and saving some costs, and with minimal load it performs well.

image

We have the RDS farm in place and we added the HTML5 client. The bits are for preview users only, therefore there is no deep dive on the installation yet.

But the HTML5 client is the same as in the Remote Desktop Services modern infrastructure; the only difference is that you are using your own RDS setup, just the way you always did in Server 2016 (see Part 1).

HTML5

Now that the RDS site is up and running, we can take a look at the new HTML5 client. Running this combined with the default RDS page makes it easy to test.

The usage is a bit different, but I must say it is fast, and instead of multiple open windows everything opens in just one browser tab with sub icons.

image_thumb[3]

As you can see, there are a lot of sub icons in the bar, but only one tab is open. In this case more work is offloaded to the RDS host, using less local compute power.

Remote Desktop Services HTML5

So you can use lighter clients and work faster and better.

Remote Desktop Services HTML5

All the Explorer windows are combined into one single icon. (Everything keeps running in the background.)

Remote Desktop Services HTML5

All the applications that are started more than once are combined in the upper bar.

So the connection is made in just the same way.

image_thumb

 

imageimage

The web client is added to the RDS site, and if you want to make this page the default, you can easily change that.

image

In the IIS HTTP redirect settings, point to the web client.
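As a sketch, the redirect can also be set from PowerShell instead of the IIS console. The site name and the /RDWeb/webclient/ path are assumptions based on my setup; adjust them to yours:

```powershell
# Redirect the default web site to the HTML5 web client (assumed path)
Import-Module WebAdministration
Set-WebConfigurationProperty -Filter 'system.webServer/httpRedirect' `
    -PSPath 'IIS:\Sites\Default Web Site' -Name 'enabled' -Value $true
Set-WebConfigurationProperty -Filter 'system.webServer/httpRedirect' `
    -PSPath 'IIS:\Sites\Default Web Site' -Name 'destination' -Value '/RDWeb/webclient/'
```

Run this on every web server in the farm; it needs the IIS management tools installed.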

Remote Desktop Services HTML5

A nice option is that a published RDP client also opens in the tab. Let's check the memory usage.

image_thumb[22] image_thumb[23]

It is less than expected, and this is on the client while we still have some applications open.

Remote Desktop Services HTML5

In the background (on the RDS server) you can see that all the processes are there, with the 32-bit Internet Explorer eating memory.

image_thumb[25] image_thumb[26]

Above is the task manager of the RDS host; the first screenshot shows the HTML5 usage and the second the default RDS usage.

Below are all the icons on the taskbar, instead of one browser tab.

Remote Desktop Services HTML5

See the load on the local machine based on the above workload.

 

image_thumb[30]

That is all for now. In the next part I'll show you more on deployment and the RD modern infrastructure.

 

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog


Posted January 17, 2018 by Robert Smit [MVP] in Windows Server 2016


Part 1 Ultimate Step to Remote Desktop Services HTML5 on Azure QuickStart Deployment #RDS #S2D #VDI #RDP #RDmi

I already wrote a Step by Step Server 2016 Remote Desktop Services QuickStart Deployment #RDS #VDI #RDP #RemoteApp:

https://robertsmit.wordpress.com/2015/06/23/step-by-step-server-2016-remote-desktop-services-quickstart-deployment-rds-vdi-rdp-remoteapp/

That covered all the basics, but currently there are so many options that I thought it was time to build a new reference guide for RDS. Remember, this is my opinion. Good or bad, this works, and yes, you can combine all the roles or split them, use the GUI version, and use the other products as well.

I started this post a while ago, thinking about the best configuration, but every time there was a little thing that made me think: maybe this isn't the best. With that in mind I started this blog post at least six times. And the best configuration is always "it depends": there are so many options that it is hard to say one size fits all, because it never is.

Microsoft Ignite is just behind us, and as expected the new RDmi (RDS modern infrastructure) is almost there. It is a totally new design if you are using the Azure components. It is more of a RemoteApp replacement, but what about on-premises? You can build some interesting configurations, such as the hybrid model of the RDS farm with the Azure File Sync option. I see great possibilities in some configurations. Building RDS on-premises is not multi-domain; it all needs to be in one domain.

RDmi (RDS modern infrastructure)

But should you wait if you want RDS? Then you could wait forever, as there is always new exciting technology around the corner.

Just start with RDS and learn. Yes, maybe next year your design will be obsolete, but it will still work. So for now I want to touch on the current RDS build, as I see on my old blog post that a lot of you are building RDS on-premises but also in Azure. To build a maximally scalable solution you will need to separate all the roles.

But in this case I want to use the option to build a future-proof reference for RDS, and yes, this can also be an RS3 or later release (that is Core anyway). I use Core Server where I can, and after the Traffic Manager there is no firewall, but it would make sense to use one of your choice. Do use NSGs for the public networks and/or IPs! https://robertsmit.wordpress.com/2017/09/11/step-by-step-azure-network-security-groups-nsg-security-center-azure-nsg-network/

And if you can, make use of Azure Security Center and point the web roles to the Azure Application Proxy.

RDmi (RDS modern infrastructure)

As there is no default firewall, I used an Azure AD Application Proxy to access the Remote Desktop Gateway website.

RDmi (RDS modern infrastructure) 

The configuration is not that hard and is well documented on the Microsoft Docs site: https://docs.microsoft.com/en-us/azure/active-directory/application-proxy-publish-remote-desktop

In this blog item I want to take the RDS basics to the next level. Everybody can do a next-next-finish installation, but this should be a step beyond that. There is no need for slow performance with the right configuration.

I'm using Core only, except for the session hosts or on servers where it is not handy. I separated the web roles, gateways, and connection brokers, and everything is highly available. In front there is a Traffic Manager that determines which web server is near you, but this is only needed if you use multiple regions or want to separate the traffic. The user profile disks (UPDs) will be hosted on a 3-node Storage Spaces Direct cluster, as I think a third or even a fourth node gives you more benefit in storage and uptime. But this is also a "depends": you can even use a single (non-redundant) file server. In this case the UPDs are redundant, and say I want 3 TB of disk space for the UPDs. I did some performance testing and the results are below.

RDmi (RDS modern infrastructure)

With the premium disks we got a good amount of performance. As I'm using SMB3 storage, I will also add a second NIC to all my RDS hosts for the SMB3 storage. This takes some extra steps to get the right performance.

You could also go for a single file server with a lot of disks. It saves money, as there is only one server and you buy the disks once, but there is no redundancy for the UPDs. On the other hand, the backup is easier. If you can handle the downtime and set up the UPDs so that they are less important, then this is a nice option.

If you build this in Azure, you must be aware that even Azure is not always on. Therefore we need to make sure the RDS site is always up. Again, this seems like a lot of servers, and maybe you don't want all this and only want one frontend server and one RD Session Host; it is all up to you, but I think the holy grail is somewhere between this and a single server.

In this case I use PowerShell for the deployment, and I deploy all VMs from a template; that way I know all VMs in this configuration are the same.
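As a minimal sketch of such a template deployment (the file names and resource group are hypothetical), something like this deploys identical VMs from one ARM template:

```powershell
# Deploy identical VMs from an ARM template (names are examples)
New-AzureRmResourceGroupDeployment -ResourceGroupName 'RG-RDS' `
    -TemplateFile '.\rds-vm-template.json' `
    -TemplateParameterFile '.\rds-vm-parameters.json'
```

Every VM that comes out of the same template and parameter file is configured identically.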

image

First I set up Traffic Manager; this is an easy setup, based on performance. I deployed all the VMs in Azure with a PowerShell script.
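The Traffic Manager setup can be sketched like this. The profile and DNS names are made up for the example, and the endpoint target ($publicIp) is assumed to be an existing public IP resource of one region's load balancer:

```powershell
# Performance-based Traffic Manager profile in front of the web servers
$tm = New-AzureRmTrafficManagerProfile -Name 'rdsfarm' -ResourceGroupName 'RG-RDS' `
    -TrafficRoutingMethod Performance -RelativeDnsName 'rdsfarm' -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath '/RDWeb'

# Add one region's public IP as an endpoint
New-AzureRmTrafficManagerEndpoint -Name 'we-endpoint' -ProfileName 'rdsfarm' `
    -ResourceGroupName 'RG-RDS' -Type AzureEndpoints `
    -TargetResourceId $publicIp.Id -EndpointStatus Enabled
```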

As all the new machines are added to Server Manager, we can use it to add them to the farm.

RDmi (RDS modern infrastructure)

When adding the machines, start with just one gateway and one connection broker; then configure the RD Connection Broker HA database.

image

For the connection broker database I use a database-as-a-service in Azure.

image

Just create the database and use the connection string in the RDS farm.

RDmi (RDS modern infrastructure)

On the connection brokers you will need the native SQL client:

https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server

https://www.microsoft.com/en-us/download/details.aspx?id=50402

Now that the database is connected, we can add all the other servers and add the certificate.

RDmi (RDS modern infrastructure)

The connection string used looks like this:

Driver={ODBC Driver 13 for SQL Server};Server=tcp:mvpserver.database.windows.net,1433;Database=rdsbd01;Uid=admin@mvpserver;Pwd={your_password_here};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;
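The HA mode can also be configured from PowerShell on the active connection broker; a sketch with hypothetical server names (the connection string is the one above):

```powershell
# Configure connection broker high availability against the Azure SQL database
Set-RDConnectionBrokerHighAvailability `
    -ConnectionBroker 'rdcb01.contoso.local' `
    -ClientAccessName 'rdcb.contoso.local' `
    -DatabaseConnectionString 'Driver={ODBC Driver 13 for SQL Server};Server=tcp:mvpserver.database.windows.net,1433;Database=rdsbd01;Uid=admin@mvpserver;Pwd={your_password_here};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'
```

The -ClientAccessName is the DNS name the clients will use; it must resolve to the connection brokers.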

image

The SQL native client is required for the connection on all Connection brokers! 

image

Now that high availability mode is configured for the connection broker, we can add another connection broker.

image

 

image

Now that the connection broker is redundant, we start adding some web servers.

First we add the Web Access role to the new Core web servers.

RDmi (RDS modern infrastructure)

Adding the servers can take some time. Just as with the web servers, we add the extra connection brokers and gateway servers with the same method.

RDmi (RDS modern infrastructure)

Even if the servers don't need a reboot, I reboot them anyway, just to make sure my configuration keeps working.

RDmi (RDS modern infrastructure)

We do the same with the gateway role and the connection broker. Now that all the roles are added, we can do some configuration.

As we already placed the RDS database in Azure, we need to apply the certificate to all the servers in the farm (web/gateway/RDCB).

RDmi (RDS modern infrastructure)

In this configuration I use the Azure load balancing option; it is free and easy to use. I will use three Azure load balancer configurations here.

Two internal and one public. The public one gets an external IP address.

image

The important setting here is the load balancer type: public or internal.

Azure Load Balancer can be configured to:

  • Load balance incoming Internet traffic to virtual machines. This configuration is known as Internet-facing load balancing.
  • Load balance traffic between virtual machines in a virtual network, between virtual machines in cloud services, or between on-premises computers and virtual machines in a cross-premises virtual network. This configuration is known as internal load balancing.
  • Forward external traffic to a specific virtual machine.

All resources in the cloud need a public IP address to be reachable from the Internet. The cloud infrastructure in Azure uses non-routable IP addresses for its resources. Azure uses network address translation (NAT) with public IP addresses to communicate to the Internet.
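A sketch of one of the three load balancer configurations (the public one on port 443; all names and the $publicIp variable are assumptions for the example):

```powershell
# Public Azure load balancer for the RD Web/Gateway servers on HTTPS
$fe    = New-AzureRmLoadBalancerFrontendIpConfig -Name 'fe-web' -PublicIpAddress $publicIp
$pool  = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name 'be-web'
$probe = New-AzureRmLoadBalancerProbeConfig -Name 'probe-443' -Protocol Tcp -Port 443 `
    -IntervalInSeconds 15 -ProbeCount 2
$rule  = New-AzureRmLoadBalancerRuleConfig -Name 'rule-443' -FrontendIpConfiguration $fe `
    -BackendAddressPool $pool -Probe $probe -Protocol Tcp -FrontendPort 443 -BackendPort 443
New-AzureRmLoadBalancer -Name 'lb-web-public' -ResourceGroupName 'RG-RDS' -Location 'westeurope' `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule
```

The two internal load balancers follow the same pattern, but with -SubnetId and -PrivateIpAddress on the frontend config instead of a public IP.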

When building the VMs, we keep them in the same availability set, as described below.

image

Update Domains

For a given availability set, five non-user-configurable update domains are assigned by default (Resource Manager deployments can then be increased to provide up to 20 update domains) to indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on.

Fault Domain

Fault domains define the group of virtual machines that share a common power source and network switch. By default, the virtual machines configured within your availability set are separated across up to three fault domains for Resource Manager deployments (two fault domains for Classic). While placing your virtual machines into an availability set does not protect your application from operating system or application-specific failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.

When creating the availability sets we use Managed Disks, and we can always change the VM size and/or disk type. That is the flexibility of Azure.

 image

If your VMs are deployed using the Resource Manager (ARM) deployment model and you need to change to a size which requires different hardware, you can resize VMs by first stopping your VM, selecting a new VM size, and then restarting the VM. If the VM you wish to resize is part of an availability set, you must stop all VMs in the availability set before changing the size of any of them. The reason is that all running VMs in the availability set must be using the same physical hardware cluster; if a change of physical hardware cluster is required to change the VM size, all VMs must first be stopped and then restarted one by one on a different physical hardware cluster.
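In PowerShell the resize looks roughly like this (VM name, resource group, and target size are examples):

```powershell
# Resize a VM; stop it first if the new size needs different hardware
Stop-AzureRmVM -ResourceGroupName 'RG-RDS' -Name 'rdsh01' -Force
$vm = Get-AzureRmVM -ResourceGroupName 'RG-RDS' -Name 'rdsh01'
$vm.HardwareProfile.VmSize = 'Standard_B2ms'   # the new size
Update-AzureRmVM -ResourceGroupName 'RG-RDS' -VM $vm
Start-AzureRmVM -ResourceGroupName 'RG-RDS' -Name 'rdsh01'
```

If the VM is in an availability set and the new size needs other hardware, stop all VMs in the set first.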

image

When changing the disk type to premium, we can also adjust the disk size to get more local IOPS. But the cost will go up!

Simple and scalable VM deployment
Managed Disks handles storage for you behind the scenes. Previously, you had to create storage accounts to hold the disks (VHD files) for your Azure VMs. When scaling up, you had to make sure you created additional storage accounts so you didn’t exceed the IOPS limit for storage with any of your disks. With Managed Disks handling storage, you are no longer limited by the storage account limits (such as 20,000 IOPS / account). You also no longer have to copy your custom images (VHD files) to multiple storage accounts. You can manage them in a central location – one storage account per Azure region – and use them to create hundreds of VMs in a subscription.

Now that we have several RDS hosts deployed, we can add them to the farm.

Adding an RDS host is just the same as adding the gateway servers or the connection brokers.

image

Now that the basics are installed, we can do some configuring.

For building the UPD share you can use my blog post on building an SOFS with Storage Spaces Direct: https://robertsmit.wordpress.com/2015/05/12/windows-server-2016-with-storage-spaces-direct-building-sofs-with-storage-spaces-direct-winserv-win2016-s2d-howtopics/

But keep in mind that there is no one size fits all. Calculate how big your storage must be, and do not size the total on your top users but on average usage.

Azure VM sizing is also not just picking one; there are a lot of new sizes, so pick the one you need. High performance or memory optimized does not mean you can only use that VM for that role. Check out the specs and test your VM. I think the B sizes are promising and cheap for a lot of roles.

Check this site for your Azure VM https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes

If you want a regular share, then use a file server or just share a folder and use that in RDS. But remember that users are reading from and writing to this share; it will use bandwidth and IOPS on the disk.

Setting the file share can only be done once per RDS collection, as shown below. Create a collection and use your share as the landing place for the user profile disks.

image

 

image

If you want to change the UPD size, it can only be done in PowerShell. Also, changing the file share setting and the URL of the gateway can only be done with PowerShell after the first configuration.

Set-RDSessionCollectionConfiguration -CollectionName Collection -MaxUserProfileDiskSizeGB 40
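Changing the gateway URL is done with Set-RDDeploymentGatewayConfiguration; a sketch with hypothetical names:

```powershell
# Point the deployment at a new external gateway FQDN
Set-RDDeploymentGatewayConfiguration -GatewayMode Custom `
    -GatewayExternalFqdn 'gateway.contoso.com' -LogonMethod Password `
    -UseCachedCredentials $true -BypassLocal $false `
    -ConnectionBroker 'rdcb01.contoso.local' -Force
```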

image

Now that everything is in place, we can launch the RDS web page. I modified the page: just make a modified page, save it somewhere, and after a new deployment copy-paste it into C:\Windows\Web\RDWeb.

image

So the page can be with or without "public computer".

image image

image

Now that the gateway, connection broker, and RDS hosts are in place, we can open the web frontend. As mentioned above, I customized the page a bit. (Save your modifications in a safe place for the next deployment.)

That’s all for part 1

In the next part I'll show you a quick overview of the HTML5 client.

 


Posted January 15, 2018 by Robert Smit [MVP] in Windows Server 2016


Check with Powershell for Meltdown and Spectre #exploit critical vulnerabilities Protection #Meltdown #Spectre #KB4056892

Meltdown and Spectre exploit critical vulnerabilities in modern processors. These hardware bugs allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents.

Edit:5-1-2018

Meltdown is Intel-only and takes advantage of a privilege escalation flaw allowing kernel memory access from user space, meaning any secret a computer is protecting (even in the kernel) is available to any user able to execute code on the system.

Spectre applies to Intel, ARM, and AMD processors and works by tricking processors into executing instructions they should not have been able to, granting access to sensitive information in other applications’ memory space.

Meltdown works on personal computers, mobile devices, and in the cloud. Depending on the cloud provider's infrastructure, it might be possible to steal data from other customers.

image

Microsoft is aware of a new publicly disclosed class of vulnerabilities referred to as “speculative execution side-channel attacks” that affects many modern processors and operating systems including Intel, AMD, and ARM. Note: this issue will affect other systems such as Android, Chrome, iOS, MacOS, so we advise customers to seek out guidance from those vendors.

Microsoft has released several updates to help mitigate these vulnerabilities. We have also taken action to secure our cloud services. See the following sections for more details.

Microsoft has not received any information to indicate that these vulnerabilities have been used to attack customers at this time. Microsoft continues to work closely with industry partners including chip makers, hardware OEMs, and app vendors to protect customers. To get all available protections, hardware/firmware and software updates are required. This includes microcode from device OEMs and in some cases updates to AV software as well.

The following sections will help you identify and mitigate client environments affected by the vulnerabilities identified in Microsoft Security Advisory ADV180002.

The Windows updates will also provide Internet Explorer and Edge mitigations. We will also continue to improve these mitigations against this class of vulnerabilities.

Customers who only install the Windows January 2018 security updates will not receive the benefit of all known protections against the vulnerabilities. In addition to installing the January security updates, a processor microcode, or firmware, update is required. This should be available through your device manufacturer. Surface customers will receive a microcode update via Windows update.

Install the PowerShell module from the Gallery.

image

Install-Module SpeculationControl

image

With  Get-SpeculationControlSettings you can check your settings
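The full check, including the execution policy juggling the module script may need, looks like this:

```powershell
# Install and run the speculation control check, then restore the policy
Install-Module SpeculationControl
$SavedPolicy = Get-ExecutionPolicy
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
Import-Module SpeculationControl
Get-SpeculationControlSettings
Set-ExecutionPolicy $SavedPolicy -Scope CurrentUser
```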

image

My system is not yet protected, but after all the fixes it should look like the output below.

image

But you need to do more than just a software patch.


Check the BIOS of your machine with:

get-wmiobject win32_bios

image

image

As there is no newer BIOS for my system, I'm out of luck. A good moment to renew my test machine.

So I need to patch my system. As a Windows Insider I run several versions of Windows. First there was KB4056890, but it has already been superseded by KB4056892; make sure you get the latest version of the patch, as you don't want to patch and reboot the machine twice.

https://support.microsoft.com/en-us/help/4056892/windows-10-update-kb4056892

Get the hotfix http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB4056890

image

The Updated version!

Get the hotfix http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB4056892

 


 

In this case I installed the KB4056890 update. Installation may stop at 99% and may show elevated CPU usage; there is a fix for that, read this:

https://support.microsoft.com/en-us/help/4056892/windows-10-update-kb4056892

 

image

You need a reboot for this fix.

image

Remember, this is not just a Microsoft Windows thing: if you are on Citrix, XenServer, Amazon, or VMware, you need to check your hardware too.

https://blogs.vmware.com/security/2018/01/vmsa-2018-0002.html

 

 


Posted January 4, 2018 by Robert Smit [MVP] in Windows Server 2016


End of support for #DirSync and #AzureAD Sync: upgrade to #Azure AD Connect before end of 2017 #Cloud

Azure AD Connect is the best way to connect your on-premises directory with Azure AD and Office 365. This is a great time to upgrade to Azure AD Connect from Windows Azure Active Directory Sync (DirSync) or Azure AD Sync as these tools are now deprecated and are no longer supported as of April 13, 2017.

image

The two identity synchronization tools that are deprecated were offered for single forest customers (DirSync) and for multi-forest and other advanced customers (Azure AD Sync). These older tools have been replaced with a single solution that is available for all scenarios: Azure AD Connect. It offers new functionality, feature enhancements, and support for new scenarios. To be able to continue to synchronize your on-premises identity data to Azure AD and Office 365, we strongly recommend that you upgrade to Azure AD Connect. Microsoft does not guarantee these older versions to work after December 31, 2017.

If you are on an old version, below is the link to get the latest version:

Microsoft Azure Active Directory Connect

https://www.microsoft.com/en-us/download/details.aspx?id=47594

  • Integrating your on-premises directories with Azure AD makes your users more productive by providing a common identity for accessing both cloud and on-premises resources. With this integration users and organizations can take advantage of the following:
    • Organizations can provide users with a common hybrid identity across on-premises or cloud-based services leveraging Windows Server Active Directory and then connecting to Azure Active Directory.
    • Administrators can provide conditional access based on application resource, device and user identity, network location and multifactor authentication.
    • Users can leverage their common identity through accounts in Azure AD to Office 365, Intune, SaaS apps and third-party applications.
    • Developers can build applications that leverage the common identity model, integrating applications into Active Directory on-premises or Azure for cloud-based applications

    Azure AD Connect makes this integration easy and simplifies the management of your on-premises and cloud identity infrastructure.

But where do you find the current version of Azure AD Connect? If you open the management tool, you can see it in the GUI.

Go to the folder Microsoft Azure AD Sync

 

image

 

image

Now start miisclient.exe; in the About box you will find your version number.
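If you prefer not to start the GUI, an alternative is reading the version from the uninstall keys in the registry; note that the display name filter is an assumption on my part:

```powershell
# Look up the installed Azure AD Connect version in the registry
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like '*Azure AD Connect*' } |
    Select-Object DisplayName, DisplayVersion
```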

image

Detailed Azure AD Connect version release history:

https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-version-history

If you need to upgrade you can do an in-place upgrade (Automatic upgrade)

High-level steps for upgrading from DirSync to Azure AD Connect
  1. Welcome to Azure AD Connect
  2. Analysis of current DirSync configuration
  3. Collect Azure AD global admin password
  4. Collect credentials for an enterprise admin account (only used during the installation of Azure AD Connect)
  5. Installation of Azure AD Connect
    • Uninstall DirSync (or temporarily disable it)
    • Install Azure AD Connect
    • Optionally begin synchronization

Remember: Azure AD will stop accepting connections from DirSync and Azure AD Sync after December 31, 2017. Upgrade now to avoid downtime and start 2018 relaxed.

 

 


Posted December 28, 2017 by Robert Smit [MVP] in Azure


#Azure Storage Spaces direct #S2D Standard Storage vs Premium Storage

I see this often in the forums: should I use Standard storage or should I use Premium storage? Well, it depends. Premium costs more than Standard, but even that depends. Can a $4000 Azure Storage Spaces configuration outperform a $1700 Premium configuration? This blog post is not about how to configure Storage Spaces, but more an overview of concepts: did I pick the right machine, or did I build the right configuration? Well, it all depends.

I love the HPC VM sizes (https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-hpc), but they are also expensive.

So in these setups I created a Storage Spaces Direct configuration, all almost basic. The key here is to pick the right VM for the job.

Standard: 6-node cluster, 4 cores, 8 GB memory; 96 disks total, type S30 (1 TB); 96 TB RAW disk space and 32 TB for the vDisk

Premium: 3-node cluster, 2 cores, 16 GB memory; 9 disks total, type P30 (1 TB); 9 TB RAW disk space and 3 TB for the vDisk

Standard: A8 (RDMA) 5-node cluster, 8 cores, 56 GB memory; 80 disks total, type P20 (500 GB); 40 TB RAW disk space

So basically, comparing these configs makes no sense, because they are different: bigger machines versus little VMs, and a lot less storage.

Standard Storage vs Premium Storage

The performance of standard disks varies with the VM size to which the disk is attached, not to the size of the disk.

image

So the nodes have 16 disks each: 16 × 500 IOPS, with a max bandwidth of 480 Mbps. That could be an issue: if I want to use the full gigabit network, I need at least 125 MB/s.

image

With Premium it is all great; building the same config as with Standard, the cost would be $3300 vs $12000. If you have a solution and you need the specifications, then this is the way to go.

Can I outperform that configuration with standard disks? In an old blog post I did a performance test on a 5-node A8 cluster with 16 premium storage P20 (500 GB) disks, 40 TB RAW, and got a network throughput of 4.2 Gbps.

image

https://robertsmit.wordpress.com/2016/01/05/using-windows-storage-spaces-direct-with-hyper-converged-in-microsoft-azure-with-windows-server-2016/

Measurements differ on different machines, and basically there is no one size fits all; it all depends on the workload, the config, and the needs.

Using the script from Mikael Nystrom (Microsoft MVP) on the standard disks: not a very impressive list, high latency, but that's Standard storage.
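If you want to run your own quick test instead of the script, Microsoft's DiskSpd tool works fine; a sketch (the file path and the workload mix are just examples):

```powershell
# 60s random test: 30% writes, 4 threads, queue depth 16, 8K blocks, latency stats
.\diskspd.exe -c10G -d60 -r -w30 -t4 -o16 -b8K -L X:\upd\testfile.dat
```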

image image

The Premium storage is way faster and more constant. So when using Azure and you need a certain amount of load or number of VMs, there is so much choice; if you hit the IOPS ceiling of the VM, picking a different machine can give better results. Prepare some calculations when building your new solution, and test some configurations before you go into production.

Azure is changing every day; today this may be the best solution, but it can be outdated tomorrow.

Below are some useful links on the Machine type and storage type.

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/acu

 https://docs.microsoft.com/en-us/azure/virtual-machines/windows/standard-storage

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-memory#ds-series

 

Thanks for reading my blog. Did you check my other blog post about Azure File Sync : https://robertsmit.wordpress.com/2017/09/28/step-by-step-azure-file-sync-on-premises-file-servers-to-azure-files-storage-sync-service-afs-cloud-msignite/

 

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Posted November 9, 2017 by Robert Smit [MVP] in Windows Cluster, Windows Server 2016


Getting Started with #Azure Data Science Virtual Machine on Windows 2016 #DSVM #winserv #VSTS #DevOps

 

The Data Science Virtual Machine (DSVM) is a ‘Windows Server 2016 with Containers’ VM & includes popular tools for data exploration, analysis, modeling & development.

Highlights:

  • Microsoft R Server – Dev. Ed. (Scalable R)
  • Anaconda Python
  • SQL Server 2017 Dev. Ed. – With In-Database R and Python analytics
  • Microsoft Office 365 ProPlus BYOL – Shared Computer Activation
  • Julia Pro + Juno Editor
  • Jupyter notebooks
  • Visual Studio Community Ed. + Python, R & node.js tools
  • Power BI Desktop
  • Deep learning tools e.g. Microsoft Cognitive Toolkit (CNTK 2.1), TensorFlow & mxnet
  • ML algorithm libraries e.g. xgboost, Vowpal Wabbit
  • Azure SDKs + libraries for various Azure Cloud offerings. Integration tools are included for: 
    1. Azure Machine Learning
    2. Azure Data Factory
    3. Stream Analytics
    4. SQL Data Warehouse
    5. Hadoop + Apache Spark (HDICluster)
    6. Data Lake
    7. Blob storage
    8. ML & Data Science tutorials as Jupyter notebooks

    Tools for ML model operationalization as web services in the cloud, using Azure ML or Microsoft R Server.

    Pre-configured and tested with Nvidia drivers, CUDA Toolkit, & NVIDIA cuDNN library for GPU workloads available if using NC class VM SKUs.


    Starting in the Azure Portal

Go to New or +.

    image

    Search for Data Science Virtual Machine (DSVM)

    image

    Select the {csp} Data Science Virtual Machine  – Windows 2016 option. 

    image

Next, fill in the username and password, and pick a resource group.

    image 

Pick a machine type. A higher machine type makes the deployment and everything else much faster than a Standard_A1 size.

    image

     

As you can see there is an orange mark in the text noting that the cost will be billed separately.

    Offer details

    Data Science Virtual Machine – Windows 2016

    0.0000 EUR/hr

Good to know: the image itself has no cost, it is free. You only pay for the Azure VM, in my case an E32s v3.

    The highlighted Marketplace purchase(s) are not covered by your Azure credits, and will be billed separately.
    You cannot use your Azure monetary commitment funds or subscription credits for these purchases. You will be billed separately for marketplace purchases.

    image

Not bad: a 9-minute install with a long list of tools (Office, Visual Studio, Visual Studio Code, etc.).

There is no free license for the Office and Visual Studio products, but you can sign in with your own credentials.

    image

Thanks to the big compute, everything runs great.

    image

As you can see all the tools are there; some need configuration, but there are no default bits that need to be removed first. You are ready to start without the long installation of all the tools.

    image

What was missing on the Data Science Virtual Machine (DSVM)? As it is a DevOps VM, I installed the RSAT tools and Project Honolulu on the single box for Azure management and development.

    https://robertsmit.wordpress.com/2017/09/25/projecthonolulu-the-new-future-of-windows-server-gui-management-servermgmt-smt-winserv/

     


    Posted October 30, 2017 by Robert Smit [MVP] in Azure


Step by Step Azure File Sync – on-premises file servers to #Azure Files Storage Sync Service #AFS #Cloud #MSIgnite

Finally Azure File Sync is here in public preview. For the last few months I had the pleasure of working with the Azure File Sync team, testing the product and thinking about some great scenarios where Azure File Sync (AFS) could be useful. And I guess you all have ideas where you could use AFS: placing your file server somewhere and getting your files to the cloud. Or use an Azure Data Box (ADB): https://azure.microsoft.com/nl-nl/updates/azure-data-box-preview/

    With Azure File Sync (preview), shares can be replicated on-premises or in Azure and accessed through SMB or NFS shares on Windows Server. Azure File Sync is useful for scenarios in which data needs to be accessed and modified far away from an Azure datacenter, such as in a branch office scenario. Data may be replicated between multiple Windows Server endpoints, such as between multiple branch offices.

    Azure File Sync (AFS)

Azure File Sync is a multi-master sync solution; it makes it easy to solve the global access problems introduced by having a single point of access, on-premises or in Azure, by replicating data between Azure File shares and servers anywhere in the world. With Azure File Sync we've introduced a very simple concept, the Sync Group, to help you manage the locations that should be kept in sync with each other. Every Sync Group has one cloud endpoint, which represents an Azure File share, and one or more server endpoints, each of which represents a path on a Windows Server. That's it! Everything within a Sync Group is automatically kept in sync.

      Azure File Sync enables organizations to:

      • Centralize file services in Azure storage
      • Cache data in multiple locations for fast, local performance
      • Eliminate local backup and DR

      The Azure File Sync agent is supported on Windows Server 2016 and Windows Server 2012 R2 and consists of three main components:

      • FileSyncSvc.exe: The background Windows service responsible for monitoring changes on Server Endpoints and initiating sync sessions to Azure.
      • StorageSync.sys: The Azure File Sync file system filter, responsible for tiering cold files to Azure Files (when cloud tiering is enabled).
      • PowerShell management cmdlets: PowerShell cmdlets for interacting with the Microsoft.StorageSync Azure Resource Provider. The cmdlets can be found at the following locations (by default):
    • %ProgramFiles%\Azure\StorageSyncAgent\StorageSync.Management.PowerShell.Cmdlets.dll
    • %ProgramFiles%\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll
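If the cmdlets are not auto-loaded in your session, you can import those DLLs directly; a minimal sketch using the default paths above:

```powershell
$agent = "$env:ProgramFiles\Azure\StorageSyncAgent"
Import-Module "$agent\StorageSync.Management.PowerShell.Cmdlets.dll"
Import-Module "$agent\StorageSync.Management.ServerCmdlets.dll"

# List the cmdlets the agent just made available.
Get-Command -Module StorageSync.Management.ServerCmdlets
```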

    The Azure File Sync agent also includes a preview version of the Work Folders server feature which has been updated to support Azure File Sync. This preview version of Work Folders does not have a UI and must be managed via PowerShell: https://docs.microsoft.com/en-us/powershell/module/syncshare/?view=win10-ps

But in the preview I'm a bit confused: what is the name of the product, Azure File Sync or Storage Sync Service? Looking it up in the Azure store and in the quick list, the name is not the same.

    imageimage

So when you have created Azure File Sync, you need to look under Storage Sync Services.

    image

Now that said, how do we build a replica to Azure and back to my other data center?

     

     Azure File Sync (AFS)

So what do we need for this scenario? We need two file servers and a storage account in Azure.

    imageimage

I created a file server, MVPAFS01, with an extra disk that hosts my on-premises files; on the other server, MVPAFS02, the share is in a different location.

Azure File Sync extends on-premises file servers into Azure, providing cloud benefits while maintaining performance and compatibility.

    Azure File Sync provides:

    • Multi-site access – provide write access to the same data across Windows Servers and Azure Files
    • Cloud Tiering – store only recently accessed data on local servers
    • Integrates with Azure backup – no need to back up your data on premises
    • Rapid DR – restore file metadata immediately and recall data as needed

    Open your Azure subscription and look into the store for Azure File Sync.

    image

     

    image

    Create the Azure File Sync components

    imageAzure File Sync (AFS)

First we make a new Storage Account; this storage account will hold the on-premises files.

    image

    image

    When the Storage account is created we create a file share on this storage account.

    image

    Currently the share has a maximum of 5TB !

    image

Max size of a file share: 5 TB

Max size of a file in a file share: 1 TB

Max number of files in a file share: limited only by the 5 TB total capacity of the file share

Max IOPS per share: 1,000

    image

In this case a limit of 4 TB is more than enough to hold my files.
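If you prefer scripting over the portal, the storage account context, the share, and its quota can also be handled with the AzureRM-era storage cmdlets. A hedged sketch; the resource group, account, and share names are placeholders:

```powershell
# Assumes the AzureRM + Azure.Storage modules are installed and you are logged in.
$ctx = (Get-AzureRmStorageAccount -ResourceGroupName 'rg-afs' `
            -Name 'mvpafsstorage').Context

New-AzureStorageShare -Name 'afsshare' -Context $ctx

# Quota is in GB: 4096 GB = 4 TB (the hard share maximum is 5 TB).
Set-AzureStorageShareQuota -ShareName 'afsshare' -Quota 4096 -Context $ctx
```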

    image

Now that the Azure File Sync resource is created, we can configure it.

First we create a sync group; within this group we can sync files from one endpoint to many.

    image

If you didn't create the Storage account and the File share yet, you will need to create these first.

    Create a sync Group

    A Sync Group contains a list of endpoints that define where a set of files sync to. Servers and Azure File Shares can participate in syncing the same set of files when they are listed in the same Sync Group.

    At the moment only one Azure File Share can participate in a Sync Group and it must be in the same region as this Storage Sync Service. Below you can create the Sync Group and its first and only Cloud Endpoint in one step. In the future you will be able to add more Cloud Endpoints. You can add Server Endpoints after this step completes.

    After creating this Sync Group and its first Cloud Endpoint, the next step is adding one or more Server Endpoints to the Sync Group.

     

    Azure File Sync (AFS)

The next step is preparing the on-premises file server: install the agent and add the Azure PowerShell modules.

    To register a server:

    • Download the Azure Storage Sync agent and install it on all servers you want to sync.
    • After finishing the agent install, use the server registration utility that opens to register the server to this Storage Sync Service.
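In the preview, registration goes through the UI that auto-starts, but for reference: in the current Az.StorageSync module the same step can be scripted. A hedged sketch; the resource names are placeholders and this module surface did not exist at the time of writing:

```powershell
# Registers the local server with the Storage Sync Service (Az.StorageSync).
Register-AzStorageSyncServer -ResourceGroupName 'rg-afs' `
    -StorageSyncServiceName 'mvp-afs-syncservice'
```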

     

    image image

When the download of the right files is finished, we start the installation of the agent.

    1. Download and run the StorageSyncAgent.msi.
    2. Follow the instructions to complete the installation.
    3. At the conclusion of the Azure File Sync agent installation, the Server Registration UI will auto-start.
    4. Follow the instructions to register the server with your Storage Sync Service.

Before we start the agent we need to disable IE Enhanced Security Configuration (for administrators only).
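Disabling the enhanced security is usually done via Server Manager, but the widely used registry snippet below does the same; run it in an elevated PowerShell session (the GUID is the stock component ID for the administrators setting):

```powershell
# Turn off IE Enhanced Security Configuration for administrators only.
$adminKey = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components' +
            '\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}'
Set-ItemProperty -Path $adminKey -Name 'IsInstalled' -Value 0

# Restart Explorer so the change is picked up.
Stop-Process -Name explorer -Force -ErrorAction SilentlyContinue
```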

     

    image

The installation of the agent is simple and quick, unless the Azure modules are not on the server.

image

Now that the agent is installed, we can register this server in Azure File Sync (AFS).

    Azure File Sync (AFS)

I did not have the Azure PowerShell modules on this server, so I need to install the modules first:

    https://go.microsoft.com/fwlink/?linkid=856959

    image

You can check the version with the following PowerShell cmdlets:

    Get-Module PowerShellGet -list | Select-Object Name,Version,Path

    # Install the Azure Resource Manager modules from the PowerShell Gallery

    Install-Module AzureRM

    imageimage

This can take some time, but you don't need a reboot for it.

    image

Just log in to the Azure subscription where the Azure File Sync (AFS) resource is installed.
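With the AzureRM modules in place, the login and subscription selection look like this (the subscription name is a placeholder):

```powershell
# AzureRM-era login; then pick the subscription holding the Storage Sync Service.
Login-AzureRmAccount
Get-AzureRmSubscription                         # list what you can use
Select-AzureRmSubscription -SubscriptionName 'MVP-Subscription'
```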

    imageimage

    Pick the right subscription and Resource Group with the Storage Sync Service.

    image

The next step after registering the server is creating an endpoint; this endpoint links the file share to the sync service.

    image

     

Creating an endpoint is the final step, but remember: as soon as it is in place, the sync service on the on-premises server starts the initial sync!

    image

    Creating the Azure File Sync (AFS) Endpoint

    image

    A Server Endpoint integrates a subfolder of a volume from a Registered Server as a location to sync. The following considerations apply:

    • Servers must be registered to the Storage Sync Service that contains this Sync Group before you can add a location on them here.
• A specific location on the server can only sync with one Sync Group. Syncing the same location, or even a part of it, with a different Sync Group doesn't work.
    • Make sure that the path you specify for this server is correct and not the root of a volume before hitting Create.

    image

    • Cloud Tiering: A switch to enable or disable cloud tiering, which enables infrequently used or accessed files to be tiered to Azure Files.
    • Volume Free Space: the amount of free space to reserve on the volume on which the Server Endpoint resides. For example, if the Volume Free Space is set to 50% on a volume with a single Server Endpoint, roughly half the amount of data will be tiered to Azure Files. Note that regardless of whether cloud tiering is enabled, your Azure File share always has a complete copy of the data in the Sync Group.
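For reference, in the current Az.StorageSync module a server endpoint with both settings can be created like this. A hedged sketch: the resource names and path are placeholders, and this cmdlet surface postdates the preview described here:

```powershell
# Find this machine among the registered servers of the sync service.
$server = Get-AzStorageSyncServer -ResourceGroupName 'rg-afs' `
              -StorageSyncServiceName 'mvp-afs-syncservice' |
          Where-Object FriendlyName -eq $env:COMPUTERNAME

# Create the server endpoint with cloud tiering and 50% volume free space.
New-AzStorageSyncServerEndpoint -ResourceGroupName 'rg-afs' `
    -StorageSyncServiceName 'mvp-afs-syncservice' `
    -SyncGroupName 'mvp-syncgroup' `
    -Name 'mvpafs01-data' `
    -ServerResourceId $server.ResourceId `
    -ServerLocalPath 'D:\Data' `
    -CloudTiering -VolumeFreeSpacePercent 50
```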

    image

Data traffic on the file server: in this case it is with just one CPU. The upload speed is around 300 Mbps at almost 100% CPU.

    imageimage

After checking the same upload with 4 cores, the upload speed more than doubled, so keep this in mind when uploading files, unless your line is the bottleneck.

    imageimage

Perfect: the files are synced and ready for cloud usage.

But I also want these files in my other datacenter. I could just copy the files and run robocopy with the deltas every few days, but I can also use a second endpoint in Azure File Sync (AFS) and keep all files in sync.
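For comparison, the classic seed-and-delta approach I mention would look like this with robocopy (the paths are placeholders):

```powershell
# /MIR mirrors the tree (including deletions), /Z makes copies restartable,
# /R and /W limit retries, /LOG keeps a record; re-run later for the deltas.
robocopy 'D:\Data' '\\MVPAFS02\Data' /MIR /Z /R:2 /W:5 /LOG:C:\Temp\seed.log
```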

The first step is the same as for any server: install the Azure File Sync (AFS) agent with the PowerShell modules and register it.

     

    image

    Connect with the same Azure subscription

    image

    As you can see the server is online and registered.

     

    image

As this server doesn't have a second disk, I place all the files on a different share.

    image

But after filling in the share name and applying it, the server gets very busy while there are still no files in the folder.

Check this: all the files are first cached in the System Volume Information folder under HFS. After caching, they are placed in the right folder.

Just keep in mind that this is the normal process; your monitoring agents could alert you about it.

    image

After the initial sync I have two file servers and an Azure Storage account with the same files. I can edit files at any of the three points and they still get synced.

    image

The synced files on the second server; as you can see, the system files are gone and placed in the share.

    image

Hope this blog gives you a start with Azure File Sync (AFS); it is very useful, as you can sync files between subscriptions, between regions, or just between your data centers.

     


    Posted September 28, 2017 by Robert Smit [MVP] in Azure

