Microsoft Tech Summit: Amsterdam #Community #Hybridcloud #ASR #Honolulu #Azure #Cloud #HCA Community #MSTechSummit @MSTCommunity #MvpBuzz

Last week I was helping at the Microsoft Tech Summit Amsterdam. The event took place at RAI Amsterdam from March 28-29, 2018. There were a lot of MVPs at the booth and at the Ask Microsoft Anything (#AMA) area. See some impressions here. The keynote was done by Tad Brockway https://twitter.com/tadbrockway.

[event photos]

The Tech Summit was a great place to meet and greet; for the experts it was also great to meet Jeff Woolsey again. In the picture above, together with James van den Berg https://mountainss.wordpress.com/

image

 

Some key features were Windows Server 2019, Azure Site Recovery, Deduplication and Project Honolulu, an amazing free tool to manage your server environment.

Introducing Windows Server 2019 – now available in preview https://cloudblogs.microsoft.com/windowsserver/2018/03/20/introducing-windows-server-2019-now-available-in-preview/

Windows Server 2019 is built on the strong foundation of Windows Server 2016 – which continues to see great momentum in customer adoption. Windows Server 2016 is the fastest adopted version of Windows Server, ever! We’ve been busy since its launch at Ignite 2016 drawing insights from your feedback and product telemetry to make this release even better. See also my Azure Site Recovery posts: https://robertsmit.wordpress.com/tag/azure-site-recovery/

The Technical Preview of Project Honolulu – our reimagined experience for management of Windows and Windows Server. Project Honolulu is a flexible, lightweight browser-based locally-deployed platform and a solution for management scenarios. Project Honolulu: The New Future Of Windows Server GUI Management

https://blogs.msdn.microsoft.com/mvpawardprogram/2017/10/27/friday-five-october-27/

Clustering FileServer Data Deduplication on Windows 2019 Step by Step #sofs #winserv #ReFS #WindowsServer2016 #Dedupe

https://robertsmit.wordpress.com/2018/02/21/clustering-fileserver-data-deduplication-on-windows-2016-step-by-step-sofs-winserv-refs-windowsserver2016-dedupe/

Sign up for the Insiders program to access Windows Server 2019

We know you probably cannot wait to get your hands on the next release, and the good news is that the preview build is available today to Windows Insiders. Join the program to ensure you have access to the bits. For more details on this preview build, check out the Release Notes.

We love hearing from you, so don’t forget to provide feedback using the Windows Feedback Hub app, or the Windows Server space in the Tech community.

image

Go register on the Community; there is great content and interaction with other Community members.

https://techcommunity.microsoft.com/t5/Microsoft-Tech-Summit-Content-17/bd-p/MSTechSummitContent

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile


Posted April 2, 2018 by Robert Smit [MVP] in Event


Microsoft Tech Summit Amsterdam #MSTechSummit

Today and tomorrow I’ll be at the Microsoft Tech Summit. You can find me as workshop proctor at "Build and manage your applications on Azure", at the Microsoft booth, or somewhere at the Expert Hub Center helping the Microsoft Tech Community.

To help the community and visitors: build your skills with the latest in cloud technologies at a free, technical learning event for IT professionals and developers, coming to a city near you. The Tech Summit is hitting the road with its top engineers to bring you two days of in-depth sessions, networking opportunities, industry insights, and hands-on skill-building with the experts behind Microsoft’s cloud services.

The cloud is changing expectations and transforming the way we live and work. Whether you’re developing innovative apps or delivering optimized solutions, Microsoft Tech Summit can help you evolve your skills, deepen your expertise, and grow your career.

Discover the latest trends, tools, and product roadmaps at more than 70 sessions, covering a range of topics across Microsoft Azure and Microsoft 365, which includes Windows 10, Office 365, and Enterprise Mobility + Security. From beginner sessions that will help you develop core cloud skills, to advanced, 400-level training that will take your expertise to the next level, there is something for everyone.

image

This year we will have two 60-minute keynotes focusing on Microsoft 365 and Azure. This will enable our keynote presenters to focus deeply on their areas of expertise and will include customers on stage and demos.

New this year to Microsoft Tech Summit is the Hub Expert Center, where attendees will have the opportunity to connect with Microsoft SMEs during Days 1 and 2. It is an excellent opportunity to connect, gather leads and engage with potential customers.

Ask the Experts: We will hold an Ask the Experts Networking Hour on Day 1 from 17:45 – 18:45. All speakers and staff are required to attend this event. New this year are two 30-minute panels hosted by Microsoft SMEs and MVPs. Additionally, attendees will be able to interact with and learn from industry peers and representatives from Microsoft. Expert table topics will be as follows, and speakers are required to self-staff these areas: Business Applications, Data and AI, Cloud Infrastructure, App Development, Internet of Things, Modern Workplace and Microsoft 365.

 

Submit your session feedback on the Microsoft Tech Community:

Visit aka.ms/ts/amsterdam

Sign in with your Microsoft or LinkedIn account and select ‘Evaluations’ to submit your feedback after sessions

image

Posted March 28, 2018 by Robert Smit [MVP] in Event


How to Backup Azure file shares with #AzureBackup #ASR #AFSB #Azure

Backup for Azure file shares is a feature we all want. Azure Files is a cloud-first file share solution with support for the industry-standard SMB protocol. Azure Backup enables a native backup solution for Azure file shares, a key addition to the feature arsenal to enable enterprise adoption of Azure Files. Using Azure Backup, via a Recovery Services vault, to protect your file shares is a straightforward way to secure your files and be assured that you can go back in time instantly.

If you want to read my old blogs about Azure Backup: https://robertsmit.wordpress.com/tag/azure-backup/

Below is a schematic of how the backup for Azure File Shares works.

Backup for Azure File Shares

Key features

  • Discover unprotected file shares: Utilize the Recovery Services vault to discover all unprotected storage accounts and file shares within them.
  • Back up multiple files at a time: You can back up at scale by selecting multiple file shares in a storage account and applying a common policy to them.
  • Schedule and forget: Apply a Backup policy to automatically schedule backups for your file shares. You can schedule backups at a time of your choice and specify the desired retention period. Azure Backup takes care of pruning these backups once they expire.
  • Instant restore: Since Azure Backup utilizes file share snapshots, you can restore just the files you need instantly even from large file shares.
  • Browse individual files/folders: Azure Backup lets you browse the restore points of your file shares directly in the Azure portal so that you can pick and restore only the necessary files and folders.

How to start with the Azure File share backup

First we make a backup vault that holds all the backups.

image

In Azure Recovery Services I created a new vault that holds my file share backup.

Doing this with PowerShell:

$vaultname="Azure-Fileshare-Vault02"
$rsgroup="AFS-BV-02"
$Location="West US"

# Create the resource group and the Recovery Services vault
New-AzureRmResourceGroup -Name $rsgroup -Location $Location
New-AzureRmRecoveryServicesVault -Name $vaultname -ResourceGroupName $rsgroup -Location $Location

# Verify that the vault was created
Get-AzureRmRecoveryServicesVault -Name $vaultname
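If needed, the vault's backup storage redundancy can also be set from PowerShell before the first backup runs. A small sketch, assuming the vault name from the example above; switching to locally redundant storage is optional and mainly useful for test vaults:

```powershell
# Fetch the vault created above
$vault = Get-AzureRmRecoveryServicesVault -Name "Azure-Fileshare-Vault02"

# Optional: change the backup storage redundancy (GeoRedundant is the default;
# LocallyRedundant is cheaper for dev/test environments)
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault -BackupStorageRedundancy LocallyRedundant
```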

Now we open the just created backup vault and add a Backup job

image

Adding the Azure Backup job

image

As you can see, the new Azure FileShare option is there. If you want to do this with PowerShell, keep in mind that you will need the latest updates, and as this is a preview it might change in the next version; currently there is only the -WorkloadType "AzureVM" option.

image

Now we select the storage account that holds the file share.

image

It could take some time for the validation.

image

Now that the file share is selected, we can make a backup policy, or use one that you already created.

image

After establishing a backup policy, a snapshot of the File Shares will be taken at the scheduled time, and the recovery point is retained for the chosen period.

image

Then finally we enable the backup. An initial backup will be created.

image

When you check the backup jobs in your backup vault you can see the just-created file share backup.

image

Just wait for the first backup, or go to the job, right-click and do a backup now.

image

You can also create an on-demand backup or stop the backup.

image

With Backup now you can force a backup of the file share.

image

If you double-click the backup item and go to …more, you can stop the backup or even delete it.

Azure File Share Restore

image

The Azure FileShare restore is easy: pick Restore in the menu and pick a restore point.

image

You can pick the original location, but an alternate location can also be used. This is a great option for restoring selected files to a different location so you can sort them out first.

 


Posted February 27, 2018 by Robert Smit [MVP] in Azure Site Recovery


Clustering FileServer Data Deduplication on Windows 2016 Step by Step #sofs #winserv #ReFS #WindowsServer2016 #Dedupe

Building a file server in Server 2016 isn’t that different than in Server 2012 R2, except there are more options: ReFS, dedupe and a lot more. We start with a basic clustered file server using ReFS and Data Deduplication. This is a common scenario and can also be used in Azure.

Data Deduplication can effectively minimize the costs of a server application’s data consumption by reducing the amount of disk space consumed by redundant data. Before enabling deduplication, it is important that you understand the characteristics of your workload to ensure that you get the maximum performance out of your storage.

In this demo I have a two-node cluster; below is a quick way to create the cluster. This is a demo for file services.

Create Sample Cluster :

# Install the File Server and clustering features on both nodes
Get-WindowsFeature Failover-Clustering
Install-WindowsFeature "Failover-Clustering","RSAT-Clustering" -IncludeAllSubFeature
Restart-Computer -ComputerName Astack16n014,Astack16n015 -Force

# Create the cluster validation report
Test-Cluster -Node Astack16n014,Astack16n015

# Create the cluster
New-Cluster -Name Astack16R5 -Node Astack16n014,Astack16n015 -NoStorage -StaticAddress "10.255.255.41"
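A two-node cluster also needs a witness to keep quorum when one node goes down. A sketch, assuming a Cloud Witness in Azure; the storage account name and access key are placeholders:

```powershell
# Configure a Cloud Witness for the demo cluster; replace the storage account
# name and access key with your own values
Set-ClusterQuorum -Cluster Astack16R5 -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"
```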

 

image

Now that the cluster is in place we can start with the basics of the file cluster: the disks need to be shareable, so no local disks.

If you want to build a file server with local disks only, you should use Storage Spaces Direct; I’ll cover this in the next blog post.

We add a shared disk to the cluster. Enable the disk and format the disk.

imageimage

I format the disk with ReFS, as this is the next-generation file system and has more options than NTFS.

The next iteration of ReFS provides support for large-scale storage deployments with diverse workloads, delivering reliability, resiliency, and scalability for your data. ReFS introduces the following improvements:
  • ReFS implements new storage tiers functionality, helping deliver faster performance and increased storage capacity. This new functionality enables:
    • Multiple resiliency types on the same virtual disk (using mirroring in the performance tier and parity in the capacity tier, for example).
    • Increased responsiveness to drifting working sets.
    • Support for SMR (Shingled Magnetic Recording) media.
  • The introduction of block cloning substantially improves the performance of VM operations, such as .vhdx checkpoint merge operations.
  • The new ReFS scan tool enables the recovery of leaked storage and helps salvage data from critical corruptions.
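Adding and formatting the disk can also be done with PowerShell instead of the GUI. A minimal sketch, assuming the shared disk is already visible to both nodes and gets drive letter E; the label is an assumption:

```powershell
# Add all disks that are visible to the cluster but not yet clustered
Get-ClusterAvailableDisk | Add-ClusterDisk

# Format the volume with ReFS; drive letter and label are placeholders
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "FileData"
```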

image

The disk is formatted and added to the cluster, showing as Available Storage.

image

Our next step would be Adding the File server role to the cluster.

image

image

The question here is: is this a normal file server, or do you want to build a SOFS cluster? Currently SOFS is only supported for RDS UPD, Hyper-V and SQL. Comparing both SOFS and a file server:

SOFS = Active – Active File share

Fileserver = Active – Passive File share

We are using the file server for general usage.
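Adding the role can be scripted as well. A sketch with placeholder names; the cluster disk name and IP address must match your environment:

```powershell
# General-use (active-passive) clustered file server; name, disk and IP are placeholders
Add-ClusterFileServerRole -Name "AFS16R5" -Storage "Cluster Disk 1" -StaticAddress "10.255.255.42"

# For an active-active SOFS you would use this instead:
# Add-ClusterScaleOutFileServerRole -Name "ASOFS01"
```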

image 

Give your file server a name. Remember: this is the NetBIOS name and it needs to be in DNS!

imageimage

The default is a DHCP IP, but I assume you will set this to a fixed address or make it static in DHCP & DNS.

image

Now that the file server and the disk are added to the cluster, we can start the file server and add the file share.

image

image

When adding the file share we see this error: “client access point is not ready to be used for share creation”.

A brand-new file server and already broken? Well, no; reading the error message, it says we can’t access the NetBIOS name.

image

When we open the properties of the file server you can see there is a DNS failure: it can’t add the server to DNS, or the registration is not correct.

Just make sure the name is in DNS and an nslookup works.

image

When adding the file share you get a couple of options; let’s pick the SMB Share – Quick option.

image

Pick the file share location; this would be on the shared disk in the cluster. If there are no folders, make the folder first.

imageimage

I give the folder a name and put it on the right disk.

image

Here you can pick a couple of options and some are already selected. In this case I only use access-based enumeration.

imageimage

The file server is ready and clients can connect. Access ACLs must be set, but this depends on the environment.
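The same share can be created with PowerShell. A sketch; the share name, path and scope name (the clustered file server name) are assumptions:

```powershell
# Create the share on the clustered disk with access-based enumeration enabled
New-SmbShare -Name "Data" -Path "E:\Data" -ScopeName "AFS16R5" -FolderEnumerationMode AccessBased

# Grant share permissions to match your environment (account name is a placeholder)
Grant-SmbShareAccess -Name "Data" -ScopeName "AFS16R5" -AccountName "Domain\FileUsers" -AccessRight Change -Force
```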

Our next step is enabling Data Deduplication on this volume. Want to know what is new in Windows Server 2016? https://docs.microsoft.com/en-us/windows-server/storage/whats-new-in-storage

Data Deduplication

Every node in the cluster must have the Data Deduplication server role installed.

To install Data Deduplication, run the following PowerShell command as an administrator:

Install-WindowsFeature -Name FS-Data-Deduplication

image

  • Recommended workloads that have been proven to have both datasets that benefit highly from deduplication and have resource consumption patterns that are compatible with Data Deduplication’s post-processing model. We recommend that you always enable Data Deduplication on these workloads:
    • General purpose file servers (GPFS) serving shares such as team shares, user home folders, work folders, and software development shares.
    • Virtualized desktop infrastructure (VDI) servers.
    • Virtualized backup applications, such as Microsoft Data Protection Manager (DPM).
  • Workloads that might benefit from deduplication, but aren’t always good candidates for deduplication. For example, the following workloads could work well with deduplication, but you should evaluate the benefits of deduplication first:
    • General purpose Hyper-V hosts
    • SQL servers
    • Line-of-business (LOB) servers
Before enabling Data Deduplication we can first check whether there are any savings to be had.

Run this in a command prompt or PowerShell, where e:\data is the data location we are using for dedupe:

C:\Windows\System32\DDPEval.exe e:\data

image

Even with a few files there is a saving.

get-volume -DriveLetter e

image

To enable the dedupe go to server manager , volumes and select the disk that need to be enabled.

image

Select the volume that needs dedupe; other volumes won’t be affected. It’s important to note that you can’t run Data Deduplication on boot or system volumes.

imageimageimage

The file-age setting (number of days) can be changed to something that suits you.

image

When enabling deduplication you need to set a schedule. You can see above that you can set two different time periods, weekdays and weekends, and you can also enable background optimization to run during quieter periods. For the rest it is all PowerShell; there is no GUI for it.

Get-Command -Module Deduplication will list all the PowerShell commands.
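The GUI steps above map to a handful of those cmdlets. A sketch for the E: volume used in this demo:

```powershell
# Enable dedupe on the volume; UsageType Default targets general purpose file servers
Enable-DedupVolume -Volume "E:" -UsageType Default

# Change the minimum file age (in days) before files are optimized
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3

# Start an optimization job right away instead of waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization

# Check the savings
Get-DedupStatus -Volume "E:"
```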

image

Measure-DedupFileMetadata -Path e:\data

image

I placed some identical ISO files on the volume and, as you can see, there is a storage saving.

To get the data, run an update on the dedupe status.

Update-DedupStatus -Volume e:

image

image

It is all easy to use and maintain. If you have any cluster questions just go to https://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=winserverClustering and I’m happy to help you there; other community and Microsoft people are there as well.

 


Posted February 21, 2018 by Robert Smit [MVP] in Windows Server 2016


Part2 Ultimate Step to Remote Desktop Services HTML5 QuickStart Deployment #RDS #VDI #RDP #RDmi

Ready for Part 2 of the RDS setup. I already did a step-by-step guide: Step by Step Server 2016 Remote Desktop Services QuickStart Deployment #RDS #VDI #RDP #RemoteApp https://robertsmit.wordpress.com/2015/06/23/step-by-step-server-2016-remote-desktop-services-quickstart-deployment-rds-vdi-rdp-remoteapp/

Then I did Part 1: Ultimate Step to Remote Desktop Services HTML5 on Azure QuickStart Deployment #RDS #S2D #VDI #RDP #RDmi https://robertsmit.wordpress.com/2018/01/15/part1-ultimate-s…s2d-vdi-rdp-rdmi/

There I decided to blog about how to build my perfect RDS environment, and yes, it always depends, but some components are just there to use in Azure. I covered all the basics, but currently there are so many options that I thought it was time to build a new reference guide for RDS. Remember, this is my opinion; for good or bad this works, and yes, you can combine all the roles or split them, use the GUI version, and use the other products as well.

Microsoft Ignite is behind us and, as expected, the new RDmi (RDS modern infrastructure) is almost there (see Channel 9 https://channel9.msdn.com/Shows/OEMTV/OEMTV1760). It is a totally new design if you are using the Azure components, but it is more like a RemoteApp replacement. What about on-premises? You can build some interesting configurations: the hybrid model of the RDS farm with the Azure File Sync option. I see great possibilities in some configurations, and in the usage of the HTML5 client. On your own build you can have those benefits as well.

Building RDS on-premises is not multi-domain; it all needs to be in one domain. But should you wait if you want RDS? Well, then you could wait forever, as there is always new exciting technology around the corner.

Just start with RDS and learn; yes, maybe next year your design is obsolete, but it will still work. For now I want to touch on the current RDS build, as I see on my old blog posts that a lot of you are building RDS on-premises but also in Azure. To build a maximally scalable solution you will need to separate all roles.

In this case I want to use the option to build a feature reference for RDS, and yes, this can also be an RS3 or later release (that’s Core anyway). I use Core Server where I can, and after the Traffic Manager there is no firewall, but it would make sense to use one of your choice. Do use NSGs for the public networks and/or IPs! https://robertsmit.wordpress.com/2017/09/11/step-by-step-azure-network-security-groups-nsg-security-center-azure-nsg-network/

The basic Remote Desktop Services with HTML5 setup I built is shown below, in Part 1.

image_thumb6

When you don’t have the right performance in your RDS host, and you are running this in Azure like me, you can always change the RDS host size. Currently I use all BxMs machines: good for making blog posts and saving some costs, and running with minimal load it performs well.

image

We have the RDS farm in place and we added the HTML5 client. The bits are for preview users only; therefore there is no deep dive yet on the installation.

But the HTML5 client is the same as on the Remote Desktop Services modern infrastructure; the only difference is that you are using your own RDS setup, just the way you always did in Server 2016 (see Part 1).

HTML5

Now that the RDS site is up and running, we can take a look at the new HTML5 client. Running this combined with the default RDS page makes it easy to test.

The usage is a bit different, but I must say it is fast, and instead of multiple windows open, it all opens in just one browser tab with sub-icons.

image_thumb[3]

As you can see, there are a lot of sub-icons in the bar, but only one tab open. In this case there is more offloading to the RDS host, using less local compute power.

Remote Desktop Services HTML5

So you can use lighter clients and work faster and better.

Remote Desktop Services HTML5

All the Explorer windows are combined into one single icon (everything is running in the background).

Remote Desktop Services HTML5

All the applications that are started more than once are combined in the upper bar.

The connection is made in just the same way.

image_thumb

 

imageimage

The web client is added to the RDS site, and if you want to make this page the default you can easily change this.

image

In the HTTP redirect use the webclient.

Remote Desktop Services HTML5

A nice option is that a published RDP client also opens in the tab. Checking the memory usage:

image_thumb[22]image_thumb[23]

It is less than expected; this is on the client, and we still have some applications open.

Remote Desktop Services HTML5

In the background (on the RDS server) you can see all the processes are there, with the 32-bit Internet Explorer eating memory.

image_thumb[25] image_thumb[26]

Above, the Task Manager of the RDS host: the first is the HTML5 usage and the second is the default RDS usage.

Below, all the icons on the taskbar instead of one browser tab.

Remote Desktop Services HTML5

See the load on the local machine based on the above workload.

 

image_thumb[30]

That is all for now. In the next part I’ll show you more on deployment and the RD modern infrastructure.

 

 


Posted January 17, 2018 by Robert Smit [MVP] in Windows Server 2016


Part1 Ultimate Step to Remote Desktop Services HTML5 on Azure QuickStart Deployment #RDS #S2D #VDI #RDP #RDmi

I already did a step-by-step guide: Step by Step Server 2016 Remote Desktop Services QuickStart Deployment #RDS #VDI #RDP #RemoteApp

https://robertsmit.wordpress.com/2015/06/23/step-by-step-server-2016-remote-desktop-services-quickstart-deployment-rds-vdi-rdp-remoteapp/

That covers all the basics, but currently there are so many options that I thought it was time to build a new reference guide for RDS. Remember, this is my opinion; for good or bad this works, and yes, you can combine all the roles or split them, use the GUI version, and use the other products as well.

I started this post a while ago, thinking about the best configuration, but every time there is a little thing: well, maybe this isn’t the best. With that in mind, I started this blog post at least six times. The best configuration is always “it depends”; there are so many options that it is hard to say one size fits all. It never is.

Microsoft Ignite is just behind us and, as expected, the new RDmi (RDS modern infrastructure) is almost there. It is a totally new design if you are using the Azure components, but it is more like a RemoteApp replacement. What about on-premises? You can build some interesting configurations: the hybrid model of the RDS farm with the Azure File Sync option. I see great possibilities in some configurations. Building RDS on-premises is not multi-domain; it all needs to be in one domain.

RDmi (RDS modern infrastructure)

But should you wait if you want RDS? Well, then you could wait forever, as there is always new exciting technology around the corner.

Just start with RDS and learn; yes, maybe next year your design is obsolete, but it will still work. For now I want to touch on the current RDS build, as I see on my old blog posts that a lot of you are building RDS on-premises but also in Azure. To build a maximally scalable solution you will need to separate all roles.

In this case I want to use the option to build a feature reference for RDS, and yes, this can also be an RS3 or later release (that’s Core anyway). I use Core Server where I can, and after the Traffic Manager there is no firewall, but it would make sense to use one of your choice. Do use NSGs for the public networks and/or IPs! https://robertsmit.wordpress.com/2017/09/11/step-by-step-azure-network-security-groups-nsg-security-center-azure-nsg-network/

If you can, make use of Azure Security Center and point the web roles to the Azure AD Application Proxy.

RDmi (RDS modern infrastructure)

As there is no default firewall, I used an AAD Application Proxy to access the Remote Desktop Gateway website.

RDmi (RDS modern infrastructure) 

The configuration is not that hard and is well documented on the Microsoft Docs site: https://docs.microsoft.com/en-us/azure/active-directory/application-proxy-publish-remote-desktop

In this blog item I want to take the RDS basics to the next level; everybody can do a next-next-finish installation, but this should be a step beyond that. There is no need for slow performance with the right configuration.

I’m using Core only, except for the session hosts or on servers where it’s not handy. I separated the web roles, gateways and connection brokers, and everything is highly available, with in front a Traffic Manager that determines which web server is near you. But this is only needed if you use multiple regions or want to separate the traffic. The user profile disks (UPD) will be hosted on a 3-node Storage Spaces Direct cluster, as I think a third or even a fourth node will give you more benefit in storage and uptime. But this is also a “depends”: you can even use a single (non-redundant) file server. In this case the UPDs are redundant, and say I want 3 TB of disk space for the UPD. I did some performance testing and the results are below.

RDmi (RDS modern infrastructure)

With the Premium disk we got a good amount of performance. As I’m using SMB3 storage I will also add a second NIC to all my RDS hosts for the SMB3 traffic. This will take some extra steps to get the right performance.

You could also go for a single file server with a lot of disks. It saves money, as there is only one server and one set of disks, but there is no redundancy for the UPDs. On the other hand, the backup is easier. If you can handle the downtime, and you design the UPDs so that they are less important, then this is a nice option.

If you build this in Azure you must be aware that even Azure is not always on; therefore we need to make sure the RDS site is always up. Again, this seems like a lot of servers, and maybe you don’t want all this and only want one frontend server and one RD Session Host; it is all up to you, but I think the holy grail is somewhere between this and a single server.

In this case I use PowerShell for the deployment, and I deploy all VMs from a template; that way I know all VMs in this configuration are the same.

image

First I set up Traffic Manager; this is an easy setup, based on performance. I deployed all the VMs in Azure with a PowerShell script.
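A performance-based Traffic Manager profile can be created like this; all names and the DNS label are placeholders, and the monitor path assumes the RD Web URL:

```powershell
# Traffic Manager profile with performance routing; names are placeholders
New-AzureRmTrafficManagerProfile -Name "rdsfarm-tm" -ResourceGroupName "RDS-RG" `
    -TrafficRoutingMethod Performance -RelativeDnsName "rdsfarm-demo" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/RDWeb"
```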

As all new machines are added to Server Manager, we can use it to add them to the farm.

RDmi (RDS modern infrastructure)

When adding the machines, just do one gateway and one connection broker first, then configure the RD Connection Broker HA database.

image

For the connection broker database I use a database-as-a-service in Azure.

image

Just create the database and use the connection string in the RDS farm.

RDmi (RDS modern infrastructure)

On the Connection brokers you will need the Native SQL client.

https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server

https://www.microsoft.com/en-us/download/details.aspx?id=50402

Now that the database is connected, we can add all the other servers and add the certificate.

RDmi (RDS modern infrastructure)

The connection string used looks like:

Driver={ODBC Driver 13 for SQL Server};Server=tcp:mvpserver.database.windows.net,1433;Database=rdsbd01;Uid=admin@mvpserver;Pwd={your_password_here};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;

image

The SQL native client is required for the connection on all Connection brokers! 
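Configuring HA mode and adding the second broker can also be done with PowerShell. A sketch; the server names and client access name are placeholders, and the connection string is the one shown above:

```powershell
# Switch the deployment to connection broker high availability using the Azure SQL database
Set-RDConnectionBrokerHighAvailability -ConnectionBroker "rdcb01.contoso.local" `
    -DatabaseConnectionString "Driver={ODBC Driver 13 for SQL Server};Server=tcp:mvpserver.database.windows.net,1433;Database=rdsbd01;Uid=admin@mvpserver;Pwd={your_password_here};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;" `
    -ClientAccessName "rds.contoso.local"

# Add the second broker to the HA pair
Add-RDServer -Server "rdcb02.contoso.local" -Role "RDS-CONNECTION-BROKER" -ConnectionBroker "rdcb01.contoso.local"
```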

image

Now that high-availability mode is configured for the connection broker, we can add another connection broker.

image

 

image

Now that the connection broker is redundant, we start adding some web servers.

First we add the Web Access role to the new Core web servers.

RDmi (RDS modern infrastructure)

Adding the servers can take some time. Just as with the web servers, we add extra connection brokers and gateway servers, using the same method.

RDmi (RDS modern infrastructure)

Even if the servers don’t need a reboot I reboot them anyway just to make sure my config is working.

RDmi (RDS modern infrastructure)

We do the same with the Gateway role and the connection broker. Now that all roles are added, we can do some configuration.

As we already placed the RDS database in Azure, we need to apply the certificate to all the servers in the farm (Web Access, Gateway, RDCB).

RDmi (RDS modern infrastructure)

In this configuration I use the Azure load balancing option; this is free and easy to use. I will use three Azure Load Balancer configurations here: two internal and one public. The public one gets an external IP.

image

The important setting here is the load balancer type: public or internal.

Azure Load Balancer can be configured to:

  • Load balance incoming Internet traffic to virtual machines. This configuration is known as Internet-facing load balancing.
  • Load balance traffic between virtual machines in a virtual network, between virtual machines in cloud services, or between on-premises computers and virtual machines in a cross-premises virtual network. This configuration is known as internal load balancing.
  • Forward external traffic to a specific virtual machine.

All resources in the cloud need a public IP address to be reachable from the Internet. The cloud infrastructure in Azure uses non-routable IP addresses for its resources. Azure uses network address translation (NAT) with public IP addresses to communicate to the Internet.

Building the VMs, we keep them in the same availability set, as described below.

image

Update Domains

For a given availability set, five non-user-configurable update domains are assigned by default (Resource Manager deployments can then be increased to provide up to 20 update domains) to indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on.

Fault Domain

Fault domains define the group of virtual machines that share a common power source and network switch. By default, the virtual machines configured within your availability set are separated across up to three fault domains for Resource Manager deployments (two fault domains for Classic). While placing your virtual machines into an availability set does not protect your application from operating system or application-specific failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.
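Creating such an availability set can be done in one line of PowerShell (names are example placeholders; the `Aligned` SKU is what marks the set as compatible with managed disks):

```powershell
# Managed availability set with the maximum update/fault domain counts
New-AzureRmAvailabilitySet -ResourceGroupName rdmi-rg -Name rdsh-avset `
    -Location westeurope -Sku Aligned `
    -PlatformUpdateDomainCount 20 -PlatformFaultDomainCount 3
```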

When creating the availability sets we use managed disks, and we can always change the VM size and/or disk type later. That is the flexibility of Azure.

 image

If your VM(s) are deployed using the Resource Manager (ARM) deployment model and you need to change to a size which requires different hardware, then you can resize VMs by first stopping your VM, selecting a new VM size and then restarting the VM. If the VM you wish to resize is part of an availability set, then you must stop all VMs in the availability set before changing the size of any VM in it. The reason all VMs in the availability set must be stopped before resizing to a size that requires different hardware is that all running VMs in the availability set must be using the same physical hardware cluster. Therefore, if a change of physical hardware cluster is required to change the VM size, all VMs must first be stopped and then restarted one by one on a different physical hardware cluster.
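A rough sketch of that resize procedure in AzureRM PowerShell (resource group, VM names and the target size are examples; note that `Stop-AzureRmVM` deallocates the VMs):

```powershell
# All VMs in the availability set must be deallocated before the resize
$rg  = 'rdmi-rg'
$vms = 'rdsh01','rdsh02','rdsh03'

$vms | ForEach-Object { Stop-AzureRmVM -ResourceGroupName $rg -Name $_ -Force }

foreach ($name in $vms) {
    # Change the size and push the updated configuration, then restart
    $vm = Get-AzureRmVM -ResourceGroupName $rg -Name $name
    $vm.HardwareProfile.VmSize = 'Standard_D4s_v3'
    Update-AzureRmVM -ResourceGroupName $rg -VM $vm
    Start-AzureRmVM -ResourceGroupName $rg -Name $name
}
```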

image

When changing the disk type to Premium, we can also increase the disk size to get more IOPS. But the cost will go up!

Simple and scalable VM deployment
Managed Disks handles storage for you behind the scenes. Previously, you had to create storage accounts to hold the disks (VHD files) for your Azure VMs. When scaling up, you had to make sure you created additional storage accounts so you didn’t exceed the IOPS limit for storage with any of your disks. With Managed Disks handling storage, you are no longer limited by the storage account limits (such as 20,000 IOPS / account). You also no longer have to copy your custom images (VHD files) to multiple storage accounts. You can manage them in a central location – one storage account per Azure region – and use them to create hundreds of VMs in a subscription.
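For example, creating and attaching a Premium managed data disk takes just a few cmdlets (disk name, size and VM name below are examples for illustration):

```powershell
# Create a 128 GB Premium managed disk (no storage account needed)
$cfg  = New-AzureRmDiskConfig -Location westeurope -CreateOption Empty `
    -DiskSizeGB 128 -SkuName Premium_LRS
$disk = New-AzureRmDisk -ResourceGroupName rdmi-rg -DiskName rdsh01-data -Disk $cfg

# Attach it to an existing VM on LUN 0
$vm = Get-AzureRmVM -ResourceGroupName rdmi-rg -Name rdsh01
$vm = Add-AzureRmVMDataDisk -VM $vm -Name rdsh01-data -CreateOption Attach `
    -ManagedDiskId $disk.Id -Lun 0
Update-AzureRmVM -ResourceGroupName rdmi-rg -VM $vm
```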

Now that we have several RDS hosts deployed, we can add them to the farm.

Adding an RDS host is just the same as adding the Gateway servers or Connection Brokers.
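This can also be scripted with the RemoteDesktop module; the FQDNs below are example placeholders:

```powershell
# Add a new session host to the existing RDS deployment
Add-RDServer -Server rdsh04.contoso.local -Role RDS-RD-SERVER `
    -ConnectionBroker rdcb.contoso.local
```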

image

Now that the basics are installed, we can do some configuring.

For building the UPD share you can use the blog post on Storage Spaces Direct with SOFS: https://robertsmit.wordpress.com/2015/05/12/windows-server-2016-with-storage-spaces-direct-building-sofs-with-storage-spaces-direct-winserv-win2016-s2d-howtopics/

But keep in mind that there is no one-size-fits-all. Calculate how big your storage must be, and do not size the total on your top users but on average usage.

Azure VM sizing is also not a matter of just picking one; a lot of new sizes are available, so pick the one that fits your needs. High-performance or memory-optimized does not mean you can only use that VM for that role. Check out the specs and test your VM. I think the B-series sizes are promising and cheap for a lot of roles.

Check this site for your Azure VM https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes

If you want a regular share, use the file server or just share a folder and use this in the RDS. But remember: users are reading and writing to this share, so it will use bandwidth and IOPS on the disk.

Setting the file share can only be done once per RDS collection, as shown below. Create a collection and use your share as the location where the user profile disks land.

image

 

image

If you want to change the UPD size, it can only be done in PowerShell. Changing the file share setting and the URL of the Gateway can also only be done with PowerShell after the initial configuration.

Set-RDSessionCollectionConfiguration -CollectionName Collection -MaxUserProfileDiskSizeGB 40

image
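As a sketch of those post-deployment changes (collection name, share path and Gateway FQDN below are examples; check the cmdlet help before running against a production deployment):

```powershell
# Point the collection's user profile disks at the share and cap the size
Set-RDSessionCollectionConfiguration -CollectionName Collection `
    -EnableUserProfileDisk -DiskPath \\sofs\upd$ -MaxUserProfileDiskSizeGB 40

# Change the Gateway URL after the first configuration
Set-RDDeploymentGatewayConfiguration -GatewayMode Custom `
    -GatewayExternalFqdn gw.contoso.com -LogonMethod Password `
    -UseCachedCredentials $true -BypassLocal $false -Force
```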

Now that everything is in place, we can launch the RDS web page. I modified the page: just make a modified page, save it somewhere, and after a new deployment copy and paste it into C:\Windows\Web\RDWeb.

image

So the page can be with or without “public computer”.

imageimage

image

Now that the Gateway, Connection Broker and RDS hosts are in place, we can open the web frontend. As mentioned above, I customized the page a bit (save your modifications in a safe place for the next deployment).

That’s all for part 1

In the next part I’ll show you a quick overview of the HTML5 client.

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Posted January 15, 2018 by Robert Smit [MVP] in Windows Server 2016


Check with Powershell for Meltdown and Spectre #exploit critical vulnerabilities Protection #Meltdown #Spectre #KB4056892   1 comment

Meltdown and Spectre exploit critical vulnerabilities in modern processors. These hardware bugs allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents.

Edit:5-1-2018

Meltdown is Intel-only and takes advantage of a privilege escalation flaw allowing kernel memory access from user space, meaning any secret a computer is protecting (even in the kernel) is available to any user able to execute code on the system.

Spectre applies to Intel, ARM, and AMD processors and works by tricking processors into executing instructions they should not have been able to, granting access to sensitive information in other applications’ memory space.

Meltdown works on personal computers, mobile devices, and in the cloud. Depending on the cloud provider’s infrastructure, it might be possible to steal data from other customers.

image

Microsoft is aware of a new publicly disclosed class of vulnerabilities referred to as “speculative execution side-channel attacks” that affects many modern processors and operating systems including Intel, AMD, and ARM. Note: this issue will affect other systems such as Android, Chrome, iOS, MacOS, so we advise customers to seek out guidance from those vendors.

Microsoft has released several updates to help mitigate these vulnerabilities. We have also taken action to secure our cloud services. See the following sections for more details.

Microsoft has not received any information to indicate that these vulnerabilities have been used to attack customers at this time. Microsoft continues to work closely with industry partners including chip makers, hardware OEMs, and app vendors to protect customers. To get all available protections, hardware/firmware and software updates are required. This includes microcode from device OEMs and in some cases updates to AV software as well.

The following sections will help you identify and mitigate client environments affected by the vulnerabilities identified in Microsoft Security Advisory ADV180002.

The Windows updates will also provide Internet Explorer and Edge mitigations. We will also continue to improve these mitigations against this class of vulnerabilities.

Customers who only install the Windows January 2018 security updates will not receive the benefit of all known protections against the vulnerabilities. In addition to installing the January security updates, a processor microcode, or firmware, update is required. This should be available through your device manufacturer. Surface customers will receive a microcode update via Windows update.

Install the PowerShell module from the Gallery.

image

Install-Module SpeculationControl

image

With Get-SpeculationControlSettings you can check your settings.
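Putting the steps together, a minimal session from an elevated PowerShell prompt looks roughly like this (the execution-policy change is only needed if script loading is restricted on your machine):

```powershell
# Install the SpeculationControl module from the PowerShell Gallery
Install-Module SpeculationControl -Scope CurrentUser

# Temporarily relax the execution policy so the module can load
$saved = Get-ExecutionPolicy
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

Import-Module SpeculationControl
Get-SpeculationControlSettings

# Restore the previous execution policy
Set-ExecutionPolicy $saved -Scope CurrentUser
```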

image

As you can see, my system is not protected; after all the fixes it should look like the output below.

image

But you need to do more than just a software patch.


Check the BIOS of your machine with:

Get-WmiObject Win32_BIOS

image

image

As there is no later BIOS for my system, I’m out of luck. A good moment to renew my test machine.

So I need to patch my system. As I’m a Windows Insider I run several versions of Windows. First there was KB4056890, but this has already been superseded by KB4056892, so make sure you get the latest version of the patch. You don’t want to patch and reboot the machine twice.

https://support.microsoft.com/en-us/help/4056892/windows-10-update-kb4056892
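A quick way to check whether the patch is already on the machine (using the KB number from this post):

```powershell
# Returns the hotfix entry if installed, nothing otherwise
Get-HotFix -Id KB4056892 -ErrorAction SilentlyContinue
```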

Get the hotfix http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB4056890

image

The Updated version!

Get the hotfix http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB4056892

 


 

In this case I installed KB4056890. Update installation may stop at 99% and may show elevated CPU usage; there is a fix for that, read this:

https://support.microsoft.com/en-us/help/4056892/windows-10-update-kb4056892

 

image

You need a reboot for this fix.

image

Remember, this is not just a Microsoft Windows thing: if you are on Citrix, XenServer, Amazon or VMware, you also need to check your hardware.

https://blogs.vmware.com/security/2018/01/vmsa-2018-0002.html

 

 


Posted January 4, 2018 by Robert Smit [MVP] in Windows Server 2016

