Archive for the ‘Windows Server 2016’ Category

Clustering FileServer Data Deduplication on Windows 2016 Step by Step #sofs #winserv #ReFS #WindowsServer2016 #Dedupe   4 comments

Building a file server in Server 2016 isn’t that different than in Server 2012 R2, except there are more options: ReFS, Data Deduplication and a lot more. We start with a basic clustered file server using ReFS and Data Deduplication. This is a common scenario and can also be used in Azure.

Data Deduplication can effectively minimize the costs of a server application’s data consumption by reducing the amount of disk space consumed by redundant data. Before enabling deduplication, it is important that you understand the characteristics of your workload to ensure that you get the maximum performance out of your storage.

In this demo I have a two-node cluster, so here is a quick build of the cluster. This is a demo for file services.

Create Sample Cluster :

#Install the file server and cluster features on both nodes
Get-WindowsFeature Failover-Clustering
Install-WindowsFeature "FS-FileServer","Failover-Clustering","RSAT-Clustering" -IncludeAllSubFeature -ComputerName Astack16n014
Install-WindowsFeature "FS-FileServer","Failover-Clustering","RSAT-Clustering" -IncludeAllSubFeature -ComputerName Astack16n015
Restart-Computer -ComputerName Astack16n014,Astack16n015 -Force

#Create cluster validation report
Test-Cluster -Node Astack16n014,Astack16n015

#Create cluster
New-Cluster -Name Astack16R5 -Node Astack16n014,Astack16n015 -NoStorage -StaticAddress "10.255.255.41"

 

image

Now that the cluster is in place we can start with the basics of the file cluster: the disks need to be shareable, so no local disks.

If you want to build a file server with local disks only, you should use Storage Spaces Direct; I’ll cover that in the next blog post.

We add a shared disk to the cluster, then bring the disk online and format it.

imageimage

I format the disk with ReFS as this is the next-generation file system and has more options than NTFS.
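The same steps can be done in PowerShell; a minimal sketch, assuming the shared disk shows up as disk number 2 and gets drive letter E:

#Bring the shared disk online, create a partition and format it with ReFS
Get-Disk | Where-Object PartitionStyle -Eq 'RAW'
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -DriveLetter E -UseMaximumSize
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "Data"

#Add the disk to the cluster as Available Storage
Get-ClusterAvailableDisk | Add-ClusterDisk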

The next iteration of ReFS provides support for large-scale storage deployments with diverse workloads, delivering reliability, resiliency, and scalability for your data. ReFS introduces the following improvements:
  • ReFS implements new storage tiers functionality, helping deliver faster performance and increased storage capacity. This new functionality enables:
    • Multiple resiliency types on the same virtual disk (using mirroring in the performance tier and parity in the capacity tier, for example).
    • Increased responsiveness to drifting working sets.
    • Support for SMR (Shingled Magnetic Recording) media.
  • The introduction of block cloning substantially improves the performance of VM operations, such as .vhdx checkpoint merge operations.
  • The new ReFS scan tool enables the recovery of leaked storage and helps salvage data from critical corruptions.

image

The disk is formatted and added to the cluster, showing as Available Storage.

image

Our next step is adding the File Server role to the cluster.

image

image

The question here is: is this a normal file server, or do you want to build an SOFS cluster? Currently SOFS is only supported for RDS UPD, Hyper-V and SQL workloads. Comparing the two:

SOFS = Active – Active file share

File Server = Active – Passive file share

We are using the file server for general usage.

image 

Give your file server a name. Remember this is the NetBIOS name and it needs to be in DNS!

imageimage

The default is a DHCP IP, but I assume you will set this to a fixed address or make it static in DHCP and DNS.
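If you prefer PowerShell, the same role can be created with Add-ClusterFileServerRole; a sketch where the role name, the cluster disk resource name and the IP address are placeholders:

#Create the general purpose file server role on the clustered disk
Add-ClusterFileServerRole -Name "Astack16FS01" -Storage "Cluster Disk 1" -StaticAddress 10.255.255.42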

image

Now that the file server role and the disk are added to the cluster, we can start the file server and add some shares to it.

Add the file share.

image

image

When adding the file share we see this error: “client access point is not ready to be used for share creation”.

A brand new file server and already broken? Well, no: reading the error message, it says we can’t access the NetBIOS name.

image

When we open the properties of the file server you can see there is a DNS failure. It can’t add the server to DNS, or the registration is not correct.

Just make sure the name is in DNS and an nslookup works.

image

When adding the file share you get a couple of options; let’s pick the SMB Share – Quick option.

image

Pick the file share location; this should be on the shared disk in the cluster. If there are no folders, create the folder first.

imageimage

I give the folder a name and place it on the right disk.

image

Here you can pick a couple of options and some are already ticked. In this case I only use access-based enumeration.

imageimage

The file server is ready and clients can connect. Access ACLs must still be set, but this depends on the environment.
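The share creation can also be scripted; a sketch where the scope name (the file server client access point) and the groups are placeholders:

#Create the folder on the clustered disk and share it with access-based enumeration
New-Item -Path "E:\Data" -ItemType Directory -Force
New-SmbShare -Name "Data" -Path "E:\Data" -ScopeName "Astack16FS01" -FolderEnumerationMode AccessBased -FullAccess "Domain\FileAdmins" -ChangeAccess "Domain\FileUsers"

Run this on the node that currently owns the file server role.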

Our next step is enabling Data Deduplication on this volume; it got a major update in Server 2016. Want to know what is new in Windows Server 2016 storage? See https://docs.microsoft.com/en-us/windows-server/storage/whats-new-in-storage

Data Deduplication

Every node in the cluster must have the Data Deduplication server role installed.

To install Data Deduplication, run the following PowerShell command as an administrator:

Install-WindowsFeature -Name FS-Data-Deduplication
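To cover both nodes from one session, the role service can also be pushed remotely; a short sketch using the node names from the cluster build above:

#Install Data Deduplication on every cluster node
Install-WindowsFeature -Name FS-Data-Deduplication -ComputerName Astack16n014
Install-WindowsFeature -Name FS-Data-Deduplication -ComputerName Astack16n015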

image

  • Recommended workloads that have been proven to have both datasets that benefit highly from deduplication and have resource consumption patterns that are compatible with Data Deduplication’s post-processing model. We recommend that you always enable Data Deduplication on these workloads:
    • General purpose file servers (GPFS) serving shares such as team shares, user home folders, work folders, and software development shares.
    • Virtualized desktop infrastructure (VDI) servers.
    • Virtualized backup applications, such as Microsoft Data Protection Manager (DPM).
  • Workloads that might benefit from deduplication, but aren’t always good candidates for deduplication. For example, the following workloads could work well with deduplication, but you should evaluate the benefits of deduplication first:
    • General purpose Hyper-V hosts
    • SQL servers
    • Line-of-business (LOB) servers
Before enabling Data Deduplication we can first check whether there are any savings to be had.

Run this in a command prompt or PowerShell window, where e:\data is the data location that we are using for the dedupe:

C:\Windows\System32\DDPEval.exe e:\data

image

Even with a few files there is a saving.

Get-Volume -DriveLetter E

image

To enable the dedupe, go to Server Manager > Volumes and select the disk that needs to be enabled.

image

Select the volume that needs dedupe; other volumes won’t be affected. It’s important to note that you can’t run Data Deduplication on boot or system volumes.

imageimageimage

The setting for the number of days can be changed to something that suits you.

image

When enabling Deduplication you need to set a schedule. As you can see above, you can set two different time periods, weekdays and weekends, and you can also enable background optimization to run during quieter periods. For the rest it is all PowerShell; there is no GUI for it.
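The same settings can be applied from PowerShell; a sketch that enables deduplication on the clustered volume, sets the file age threshold and adds an extra weekend optimization window (the values are examples, not recommendations):

#Enable deduplication for general purpose file server usage
Enable-DedupVolume -Volume "E:" -UsageType Default

#Only deduplicate files older than 3 days (the "days" setting from the wizard)
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3

#Extra throughput optimization window during the weekend
New-DedupSchedule -Name "WeekendOptimization" -Type Optimization -Days Saturday,Sunday -Start "22:00" -DurationHours 6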

Get-Command -Module Deduplication will list all the PowerShell commands.

image

Measure-DedupFileMetadata -Path e:\data

image

I placed some identical ISO files on the volume and, as you can see, there is a storage saving.

To get the data, run an update on the dedupe status:

Update-DedupStatus -Volume e:
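Besides updating the status, the savings can be read back per volume with the dedup cmdlets:

#Read the savings back for the volume
Get-DedupStatus -Volume "E:" | Format-List
Get-DedupVolume -Volume "E:" | Select-Object Volume,SavedSpace,SavingsRate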

image

image

It is all easy to use and maintain. If you have any cluster questions, just go to https://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=winserverClustering and I’m happy to help you there; other community and Microsoft folks are there as well.

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Robert Smit MVP Linkedin profile

Google  : Robert Smit MVP profile

Bing  : Find me on Bing Robert Smit

LMGTFY : Find me on google Robert Smit


Posted February 21, 2018 by Robert Smit [MVP] in Windows Server 2016

Tagged with

Part2 Ultimate Step to Remote Desktop Services HTML5 QuickStart Deployment #RDS #VDI #RDP #RDmi   Leave a comment

Ready for Part 2 of the RDS setup. I already did a Step by Step Server 2016 Remote Desktop Services QuickStart Deployment #RDS #VDI #RDP #RemoteApp https://robertsmit.wordpress.com/2015/06/23/step-by-step-server-2016-remote-desktop-services-quickstart-deployment-rds-vdi-rdp-remoteapp/

Then I did Part 1, Ultimate Step to Remote Desktop Services HTML5 on Azure QuickStart Deployment #RDS #S2D #VDI #RDP #RDmi https://robertsmit.wordpress.com/2018/01/15/part1-ultimate-s…s2d-vdi-rdp-rdmi/

There I decided to do a blog on how to build my perfect RDS environment, and yes, it always depends, but some components are just there to use in Azure. I covered all the basics already, but currently there are so many options that I thought it was time to build a new reference guide for RDS. Remember, this is my opinion. Good or bad, this works, and yes, you can combine all the roles or split them, use the GUI version and use the other products as well.

Microsoft Ignite is behind us and, as expected, the new RDmi (RDS modern infrastructure) is almost there (see Channel 9 https://channel9.msdn.com/Shows/OEMTV/OEMTV1760). A totally new design if you are using the Azure components. It is more like a RemoteApp replacement, but what about on-premises? You can build some interesting configurations, for example the hybrid model of an RDS farm with the Azure File Sync option. I see great possibilities in some configurations, and in the usage of the HTML5 client. On your own build you can have those benefits as well.

Building RDS on-premises is not multi-domain; it all needs to be in one domain. But should you wait if you want RDS? Well, then you could wait forever, as there is always new exciting technology around the corner.

Just start with RDS and learn; yes, maybe next year your design is obsolete, but it will still work. So for now I want to touch on the current RDS build, as I see on my old blog posts that a lot of you are building RDS on-premises but also in Azure. To build the most scalable solution you will need to separate all roles.

In this case I want to use the opportunity to build a future reference for RDS, and yes, this can also be an RS3 or later release (that’s Core anyway). I use Core Server where I can, and after the Traffic Manager there is no firewall, but it would make sense to use one of your choice. Do use NSGs for the public networks and/or IPs! https://robertsmit.wordpress.com/2017/09/11/step-by-step-azure-network-security-groups-nsg-security-center-azure-nsg-network/

The basic Remote Desktop Services with HTML5 setup I built in Part 1 is shown below.

image_thumb6

When you don’t get the right performance from your RDS host and you are running this in Azure like me, you can always change the RDS host size. Currently I use all BxMs (B-series burstable) machines: good for making blog posts and saving some costs, and running with minimal load they perform well.

image

We have the RDS farm in place and we added the HTML5 client. The bits are for preview users only, therefore there is no deep dive on the installation yet.

But the HTML5 client is the same as in the Remote Desktop Services modern infrastructure; the only difference is that you are using your own RDS setup, just the way you always did in Server 2016 (see Part 1).
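At the time of writing the web client bits are distributed through the PowerShell Gallery; installing them on the RD Web Access server looks roughly like this (the certificate path is a placeholder for the exported RD Connection Broker certificate):

#Install the management module and publish the HTML5 web client (Test channel during the preview)
Install-Module -Name RDWebClientManagement
Import-RDWebClientBrokerCert "C:\certs\rdcb.cer"
Install-RDWebClientPackage
Publish-RDWebClientPackage -Type Test -Latest

After publishing, the client is reachable under /RDWeb/webclient/ on the RD Web Access server.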

HTML5

Now that the RDS site is up and running, we can take a look at the new HTML5 client. Running this combined with the default RDS page makes it easy to test.

The usage is a bit different, but I must say it is fast, and instead of multiple windows it all opens in just one browser tab with sub-icons.

image_thumb[3]

As you can see there are a lot of sub-icons in the bar, but there is only one tab open. In this case more is offloaded to the RDS host, using less local compute power.

Remote Desktop Services HTML5

So you can use lighter clients and work faster and better.

Remote Desktop Services HTML5

All the Explorer windows are combined into one single icon. (Everything is running in the background.)

Remote Desktop Services HTML5

All the applications that are started more than once are combined in the upper bar.

The connection is made in just the same way.

image_thumb

 

imageimage

The web client is added to the RDS site, and if you want to make this page the default you can easily change that.

image

In the HTTP redirect, point to the web client.
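If you want to script that redirect instead of clicking through IIS Manager, here is a sketch with the WebAdministration module (assuming the Default Web Site hosts RD Web Access):

#Redirect the root of the default web site to the HTML5 web client
Import-Module WebAdministration
Set-WebConfigurationProperty -Filter '/system.webServer/httpRedirect' -PSPath 'IIS:\Sites\Default Web Site' -Name enabled -Value $true
Set-WebConfigurationProperty -Filter '/system.webServer/httpRedirect' -PSPath 'IIS:\Sites\Default Web Site' -Name destination -Value '/RDWeb/webclient/'
#Only redirect requests to the root, not to subfolders, to avoid a redirect loop
Set-WebConfigurationProperty -Filter '/system.webServer/httpRedirect' -PSPath 'IIS:\Sites\Default Web Site' -Name childOnly -Value $true
Set-WebConfigurationProperty -Filter '/system.webServer/httpRedirect' -PSPath 'IIS:\Sites\Default Web Site' -Name httpResponseStatus -Value 'Found'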

Remote Desktop Services HTML5

A nice option is that a published RDP client also opens in the tab. Let’s check the memory usage.

image_thumb[22]image_thumb[23]

It is less than expected; this is on the client, and we still have some applications open.

Remote Desktop Services HTML5

In the background (on the RDS server) you can see all the processes are there, and the 32-bit Internet Explorer is eating memory.

image_thumb[25] image_thumb[26]

Above is the Task Manager of the RDS host: the first is the HTML5 usage and the second is the default RDS usage.

Below: all the icons on the taskbar instead of in one browser tab.

Remote Desktop Services HTML5

See the load on the local machine based on the above workload.

 

image_thumb[30]

That is all for now. In the next part I’ll show you more on deployment and the RD modern infrastructure.

 

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Posted January 17, 2018 by Robert Smit [MVP] in Windows Server 2016

Tagged with , ,

Part1 Ultimate Step to Remote Desktop Services HTML5 on Azure QuickStart Deployment #RDS #S2D #VDI #RDP #RDmi   Leave a comment

I already did a Step by Step Server 2016 Remote Desktop Services QuickStart Deployment #RDS #VDI #RDP #RemoteApp

https://robertsmit.wordpress.com/2015/06/23/step-by-step-server-2016-remote-desktop-services-quickstart-deployment-rds-vdi-rdp-remoteapp/

That covers all the basics, but currently there are so many options that I thought it was time to build a new reference guide for RDS. Remember, this is my opinion. Good or bad, this works, and yes, you can combine all the roles or split them, use the GUI version and use the other products as well.

I started this post a while ago, thinking about the best configuration, but every time there was a little thing that made me think maybe this isn’t the best. With that in mind I started this blog post at least 6 times. And the best configuration is always “it depends”: there are so many options that it is hard to say one size fits all, because it never is.

Microsoft Ignite is just behind us and, as expected, the new RDmi (RDS modern infrastructure) is almost there. A totally new design if you are using the Azure components. It is more like a RemoteApp replacement, but what about on-premises? You can build some interesting configurations, for example the hybrid model of an RDS farm with the Azure File Sync option. I see great possibilities in some configurations. Building RDS on-premises is not multi-domain; it all needs to be in one domain.

RDmi (RDS modern infrastructure)

But should you wait if you want RDS? Well, then you could wait forever, as there is always new exciting technology around the corner.

Just start with RDS and learn; yes, maybe next year your design is obsolete, but it will still work. So for now I want to touch on the current RDS build, as I see on my old blog posts that a lot of you are building RDS on-premises but also in Azure. To build the most scalable solution you will need to separate all roles.

In this case I want to use the opportunity to build a future reference for RDS, and yes, this can also be an RS3 or later release (that’s Core anyway). I use Core Server where I can, and after the Traffic Manager there is no firewall, but it would make sense to use one of your choice. Do use NSGs for the public networks and/or IPs! https://robertsmit.wordpress.com/2017/09/11/step-by-step-azure-network-security-groups-nsg-security-center-azure-nsg-network/

If you can, make use of Azure Security Center and point the web roles to the Azure AD Application Proxy.

RDmi (RDS modern infrastructure)

As there is no default firewall, I used an Azure AD Application Proxy to access the Remote Desktop Gateway website.

RDmi (RDS modern infrastructure) 

The configuration is not that hard and is well documented on the Microsoft Docs site: https://docs.microsoft.com/en-us/azure/active-directory/application-proxy-publish-remote-desktop

In this blog item I want to take the RDS basics to the next level. Everybody can do a next-next-finish installation, but this should be a step beyond that. There is no need for slow performance with the right configuration.

I’m using Core only, except for the session hosts or on servers where it’s not handy. I separated the web roles, gateways and connection brokers, and everything is highly available. In front there is a Traffic Manager that determines which web server is near you, but this is only needed if you use multiple regions or want to separate the traffic. The User Profile Disks will be hosted on a 3-node Storage Spaces Direct cluster, as I think a third or even a fourth node gives you more benefit in storage and uptime. But this is also a “depends”: you can even use a single (non-redundant) file server. In this case the UPDs are redundant, and say I want 3 TB of disk space for the UPDs. I did some performance testing and the results are below.

RDmi (RDS modern infrastructure)

With the Premium disks we got a good amount of performance. As I’m using SMB3 storage I will also add a second NIC to all my RDS hosts for the SMB3 traffic. This takes some extra steps to get the right performance.

You could also go for a single file server with a lot of disks. It saves money, as there is only one server and you buy the disks once, but there is no redundancy for the UPDs. On the other hand, the backup is easier. If you can handle the downtime and design the UPDs so that they are less important, then this is a nice option.

If you build this in Azure you must be aware that even Azure is not always on, therefore we need to make sure the RDS site is always up. And again, this seems to be a lot of servers, and maybe you don’t want all this and only want one front-end server and one RD Session Host; it is all up to you, but I think the holy grail is somewhere between this and a single server.

In this case I use PowerShell for the deployment, and I deploy all VMs from a template; that way I know all VMs are the same in this configuration.

image

First I set up Traffic Manager; this is an easy setup and based on performance. I deployed all the VMs in Azure with a PowerShell script.
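A minimal sketch of that Traffic Manager setup with the AzureRM module of that time; the profile name, DNS label, endpoint and resource group names are placeholders:

#Performance based Traffic Manager profile probing the RD Web page
$tmProfile = New-AzureRmTrafficManagerProfile -Name "rds-tm" -ResourceGroupName "RG-RDS" -TrafficRoutingMethod Performance -RelativeDnsName "rds-demo-web" -Ttl 30 -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/RDWeb"

#Add the public IP of a regional web front end as an Azure endpoint (the public IP needs a DNS label)
$pip = Get-AzureRmPublicIpAddress -Name "pip-rdweb-weu" -ResourceGroupName "RG-RDS"
New-AzureRmTrafficManagerEndpoint -Name "weu-webaccess" -ProfileName "rds-tm" -ResourceGroupName "RG-RDS" -Type AzureEndpoints -TargetResourceId $pip.Id -EndpointStatus Enabled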

As all new machines are added to Server Manager, we can use it to add them to the farm.

RDmi (RDS modern infrastructure)

When adding the machines, just add one gateway and one connection broker first, then configure the RD Connection Broker HA database.

image

For the Connection Broker database I use a database-as-a-service in Azure.

image

Just create the database and use the connection string in the RDS farm.

RDmi (RDS modern infrastructure)

On the Connection brokers you will need the Native SQL client.

https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server

https://www.microsoft.com/en-us/download/details.aspx?id=50402

Now that the database is connected we can add all the other servers and add the certificate.

RDmi (RDS modern infrastructure)

The connection string used looks like this:

Driver={ODBC Driver 13 for SQL Server};Server=tcp:mvpserver.database.windows.net,1433;Database=rdsbd01;Uid=admin@mvpserver;Pwd={your_password_here};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;
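Configuring the high availability mode against that Azure SQL database can also be done from PowerShell; a sketch where the broker FQDN and the client access name are placeholders and the connection string is the one shown above:

#Switch the deployment to a shared database server (Azure SQL) based HA configuration
Set-RDConnectionBrokerHighAvailability -ConnectionBroker "rdcb01.domain.local" -ClientAccessName "rdcb.domain.local" -DatabaseConnectionString "Driver={ODBC Driver 13 for SQL Server};Server=tcp:mvpserver.database.windows.net,1433;Database=rdsbd01;Uid=admin@mvpserver;Pwd={your_password_here};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"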

image

The SQL native client is required for the connection on all Connection brokers! 

image

Now that the Connection Broker high availability mode is configured, we can add another connection broker.

image

 

image

Now that the connection broker is redundant, we start adding some web servers.

First we add the Web Access role to the new Core web servers.
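Adding roles can also be done with Add-RDServer instead of the wizard; a sketch where the server names and the external gateway FQDN are placeholders:

#Add an extra web access server, gateway and connection broker to the deployment
Add-RDServer -Server "rdweb02.domain.local" -Role RDS-WEB-ACCESS -ConnectionBroker "rdcb01.domain.local"
Add-RDServer -Server "rdgw02.domain.local" -Role RDS-GATEWAY -ConnectionBroker "rdcb01.domain.local" -GatewayExternalFqdn "rds.domain.com"
Add-RDServer -Server "rdcb02.domain.local" -Role RDS-CONNECTION-BROKER -ConnectionBroker "rdcb01.domain.local"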

RDmi (RDS modern infrastructure)

Adding the servers can take some time. Just as with the web servers, we add extra connection brokers and gateway servers the same way.

RDmi (RDS modern infrastructure)

Even if the servers don’t need a reboot I reboot them anyway just to make sure my config is working.

RDmi (RDS modern infrastructure)

We do the same with the Gateway role and the Connection Broker role. Now that all roles are added we can do some configuration.

As we already moved the RDS database to Azure, we need to apply the certificate to all the servers in the farm (Web Access/Gateway/RDCB).
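Applying the certificate to every role can be scripted as well; a sketch assuming one PFX file, with the path, password and broker name as placeholders:

#Apply the same certificate to all four RDS certificate roles
$pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
foreach ($role in 'RDGateway','RDWebAccess','RDRedirector','RDPublishing') {
    Set-RDCertificate -Role $role -ImportPath "C:\certs\rds-wildcard.pfx" -Password $pfxPassword -ConnectionBroker "rdcb01.domain.local" -Force
}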

RDmi (RDS modern infrastructure)

In this configuration I use the Azure load balancing option; it is free and easy to use. I will use three Azure load balancer configurations in this setup.

Two internal and one public. The public one gets an external IP.

image

The important setting here is the load balancer type: public or internal.

Azure Load Balancer can be configured to:

  • Load balance incoming Internet traffic to virtual machines. This configuration is known as Internet-facing load balancing.
  • Load balance traffic between virtual machines in a virtual network, between virtual machines in cloud services, or between on-premises computers and virtual machines in a cross-premises virtual network. This configuration is known as internal load balancing.
  • Forward external traffic to a specific virtual machine.

All resources in the cloud need a public IP address to be reachable from the Internet. The cloud infrastructure in Azure uses non-routable IP addresses for its resources. Azure uses network address translation (NAT) with public IP addresses to communicate to the Internet.

When building the VMs we keep them in the same availability set, as described below.

image

Update Domains

For a given availability set, five non-user-configurable update domains are assigned by default (Resource Manager deployments can then be increased to provide up to 20 update domains) to indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on.

Fault Domain

Fault domains define the group of virtual machines that share a common power source and network switch. By default, the virtual machines configured within your availability set are separated across up to three fault domains for Resource Manager deployments (two fault domains for Classic). While placing your virtual machines into an availability set does not protect your application from operating system or application-specific failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.

When creating the availability sets we are using Managed Disks, and we can always change the VM size and/or disk type. That is the flexibility of Azure.

 image

If your VM(s) are deployed using the Resource Manager (ARM) deployment model and you need to change to a size which requires different hardware then you can resize VMs by first stopping your VM, selecting a new VM size and then restarting the VM. If the VM you wish to resize is part of an availability set, then you must stop all VMs in the availability set before changing the size of any VM in the availability set. The reason all VMs in the availability set must be stopped before performing the resize operation to a size that requires different hardware is that all running VMs in the availability set must be using the same physical hardware cluster. Therefore, if a change of physical hardware cluster is required to change the VM size then all VMs must be first stopped and then restarted one-by-one to a different physical hardware clusters.

image

When changing the disk type to Premium we can also increase the disk size to get more local IOPS, but the cost will go up!

Simple and scalable VM deployment
Managed Disks handles storage for you behind the scenes. Previously, you had to create storage accounts to hold the disks (VHD files) for your Azure VMs. When scaling up, you had to make sure you created additional storage accounts so you didn’t exceed the IOPS limit for storage with any of your disks. With Managed Disks handling storage, you are no longer limited by the storage account limits (such as 20,000 IOPS / account). You also no longer have to copy your custom images (VHD files) to multiple storage accounts. You can manage them in a central location – one storage account per Azure region – and use them to create hundreds of VMs in a subscription.

Now that we have several RDS hosts deployed we can add them to the farm.

Adding an RDS host is just the same as adding the gateway servers or connection brokers.

image

Now that the basics are installed, we can do some configuring.

For building the UPD share you can use the blog post on Storage Spaces with SOFS: https://robertsmit.wordpress.com/2015/05/12/windows-server-2016-with-storage-spaces-direct-building-sofs-with-storage-spaces-direct-winserv-win2016-s2d-howtopics/

But keep in mind that there is no one size fits all. Calculate how big your storage must be, and do not size the total on your top users but on average usage.

Azure VM sizing is also not just picking one; a lot of new sizes are out there, so pick the one you need. High performance or memory optimized does not mean you can only use that VM for that role. Check out the specs and test your VM. I think the B sizes are promising and cheap for a lot of roles.

Check this site for your Azure VM https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes

If you want a regular share then use the file server, or just share a folder and use this in RDS. But remember users are reading and writing to this share, so it will use bandwidth and IOPS on the disk.

Setting the file share can only be done once per RDS collection, as shown below. Create a collection and use your share as the location where the User Profile Disks land.

image

 

image

If you want to change the UPD size, it can only be done in PowerShell. The file share setting and changing the URL of the gateway can also only be done with PowerShell after the first configuration.

Set-RDSessionCollectionConfiguration -CollectionName Collection -MaxUserProfileDiskSizeGB 40
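A sketch of both PowerShell-only settings mentioned here; the UPD share path, external FQDN and broker name are placeholders:

#Point the collection at the UPD share and cap the disk size
Set-RDSessionCollectionConfiguration -CollectionName "Collection" -EnableUserProfileDisk -DiskPath "\\sofs\upd$" -MaxUserProfileDiskSizeGB 40

#Change the external gateway URL after the initial configuration
Set-RDDeploymentGatewaySettings -GatewayMode Custom -GatewayExternalFqdn "rds.domain.com" -LogonMethod Password -UseCachedCredentials $true -BypassLocal $false -ConnectionBroker "rdcb01.domain.local"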

image

Now that everything is in place we launch the RDS web page. I modified the page: just make a modified page, save it somewhere, and after a new deployment copy/paste it into C:\Windows\Web\RDWeb.

image

So the page can be with or without the “public computer” option.

imageimage

image

Now that the Gateway, Connection Broker and the RDS hosts are in place we can open the web front end. As mentioned above, I customized the page a bit. (Save your modifications in a safe place for the next deployment.)

That’s all for part 1

In the next part I’m showing you a quick overview of the HTML5 client.

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Posted January 15, 2018 by Robert Smit [MVP] in Windows Server 2016

Tagged with , ,

Check with Powershell for Meltdown and Spectre #exploit critical vulnerabilities Protection #Meltdown #Spectre #KB4056892   1 comment

Meltdown and Spectre exploit critical vulnerabilities in modern processors. These hardware bugs allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents.

Edit:5-1-2018

Meltdown is Intel-only and takes advantage of a privilege escalation flaw allowing kernel memory access from user space, meaning any secret a computer is protecting (even in the kernel) is available to any user able to execute code on the system.

Spectre applies to Intel, ARM, and AMD processors and works by tricking processors into executing instructions they should not have been able to, granting access to sensitive information in other applications’ memory space.

Meltdown works on personal computers, mobile devices, and in the cloud. Depending on the cloud provider’s infrastructure, it might be possible to steal data from other customers.

image

Microsoft is aware of a new publicly disclosed class of vulnerabilities referred to as “speculative execution side-channel attacks” that affects many modern processors and operating systems including Intel, AMD, and ARM. Note: this issue will affect other systems such as Android, Chrome, iOS, MacOS, so we advise customers to seek out guidance from those vendors.

Microsoft has released several updates to help mitigate these vulnerabilities. We have also taken action to secure our cloud services. See the following sections for more details.

Microsoft has not received any information to indicate that these vulnerabilities have been used to attack customers at this time. Microsoft continues to work closely with industry partners including chip makers, hardware OEMs, and app vendors to protect customers. To get all available protections, hardware/firmware and software updates are required. This includes microcode from device OEMs and in some cases updates to AV software as well.

The following sections will help you identify and mitigate client environments affected by the vulnerabilities identified in Microsoft Security Advisory ADV180002.

The Windows updates will also provide Internet Explorer and Edge mitigations. We will also continue to improve these mitigations against this class of vulnerabilities.

Customers who only install the Windows January 2018 security updates will not receive the benefit of all known protections against the vulnerabilities. In addition to installing the January security updates, a processor microcode, or firmware, update is required. This should be available through your device manufacturer. Surface customers will receive a microcode update via Windows update.

Install the PowerShell module from the Gallery.

image

Install-Module SpeculationControl

image

With Get-SpeculationControlSettings you can check your settings.
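The full sequence, including saving and restoring the execution policy, looks roughly like this:

#Temporarily relax the execution policy, install the module and run the check
$SaveExecutionPolicy = Get-ExecutionPolicy
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser -Force
Install-Module SpeculationControl -Force
Import-Module SpeculationControl
Get-SpeculationControlSettings
#Restore the previous execution policy
Set-ExecutionPolicy $SaveExecutionPolicy -Scope CurrentUser -Force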

image

My system is not protected yet; after all the fixes it should look like the output below.

image

But you need to do more than just a software patch.

As mentioned above, in addition to installing the January security updates, a processor microcode (firmware) update from your device manufacturer is required.

Check the BIOS version of your machine with:

Get-WmiObject Win32_BIOS

image

image

As there is no later BIOS for my system, I’m out of luck. A good moment to renew my test machine.

So I need to patch my system. As I’m a Windows Insider I run several versions of Windows. At first there was KB4056890, but this is already superseded by KB4056892, so make sure you get the latest version of the patch; you don’t want to patch and reboot the machine twice.

https://support.microsoft.com/en-us/help/4056892/windows-10-update-kb4056892

Get the hotfix http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB4056890

image

The Updated version!

Get the hotfix http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB4056892

 


 

In this case I installed the KB4056890 update. Installation may stop at 99% and may show elevated CPU; there is a fix for that, read this:

https://support.microsoft.com/en-us/help/4056892/windows-10-update-kb4056892

 

image

You need a reboot for this fix.

image

Remember, this is not just a Microsoft Windows thing: if you are on Citrix, XenServer, Amazon or VMware you need to check your hardware too.

https://blogs.vmware.com/security/2018/01/vmsa-2018-0002.html

 

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Posted January 4, 2018 by Robert Smit [MVP] in Windows Server 2016

Tagged with

#Azure Storage Spaces direct #S2D Standard Storage vs Premium Storage   Leave a comment

I see this question often in the forums: should I use Standard storage or should I use Premium storage? Well, it depends. Premium costs more than Standard, but even that depends on the basics. Can a $4000 Azure Storage Spaces configuration outperform a $1700 Premium configuration? This blog post is not about how to configure Storage Spaces but more an overview of concepts: did I pick the right machine, did I build the right configuration? Well, it all depends.

I love the HPC VM sizes https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-hpc, but they are also expensive.

So in these setups I created an almost basic Storage Spaces Direct configuration, but the key here is to pick the right VM for the job.

Standard 6 node cluster 4 core 8GB memory total disks 96 Type S30 (1TB) RAW disk space 96TB  and 32TB for the vDisk

Premium 3 node Cluster 2 core 16GB memory Total disks 9 Type P30 (1TB) RAW disk space 9TB  and 3TB for the vDisk

Standard A8 (RDMA) 5 node cluster 8 core 56GB memory total disks 80 Type p20 (500GB) RAW disk space 40TB

So basically comparing both configurations makes no sense, because they are different: bigger machines versus small VMs, and a lot less storage.

Standard storage vs Premium storage

The performance of standard disks varies with the VM size to which the disk is attached, not to the size of the disk.

image

So the nodes have 16 disks each, 16 * 500 IOPS, and a max bandwidth of 480 Mbps. That could be an issue: to use a full gigabit network I would need at least 125 MB/s.

image

With Premium it is all great; building the same config as the Standard one, the cost would be $3300 vs $12000. If you have a solution and you need the specifications, then this is the way to go.

Can I outperform that configuration with standard disks? In an old blog post I did a performance test on a 5-node A8 cluster with 16 Premium P20 (500 GB) disks per node, 40 TB RAW, and got a network throughput of 4.2 Gbps.

image

https://robertsmit.wordpress.com/2016/01/05/using-windows-storage-spaces-direct-with-hyper-converged-in-microsoft-azure-with-windows-server-2016/

Measurements are different on different machines, and basically there is no one size fits all; it all depends on the workload, the config and the needs.

Using the test script from Mikael Nystrom (Microsoft MVP) on the Standard disks gives a not very impressive result with high latency, but that’s Standard storage.
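If you want to run a comparable quick test yourself without that script, DiskSpd works too; a sketch assuming diskspd.exe is in the current folder and E: is the data volume:

#60 seconds of 8 KB random I/O, 30% writes, 8 threads, queue depth 8, caching disabled, latency statistics
.\diskspd.exe -c10G -d60 -r -w30 -t8 -o8 -b8K -Sh -L E:\perftest.dat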

imageimage

The Premium storage is way faster and more consistent. So when using Azure and you need a certain amount of load or VMs, there is so much choice; if you pick a different machine the results can be better, especially when hitting the IOPS ceiling of the VM. Prepare some calculations when building your new solution, and test some configurations before you go into production.

Azure is changing every day; today this may be the best solution but it can be outdated tomorrow.

Below are some useful links on the Machine type and storage type.

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/acu

 https://docs.microsoft.com/en-us/azure/virtual-machines/windows/standard-storage

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-memory#ds-series

 

Thanks for reading my blog. Did you check my other blog post about Azure File Sync : https://robertsmit.wordpress.com/2017/09/28/step-by-step-azure-file-sync-on-premises-file-servers-to-azure-files-storage-sync-service-afs-cloud-msignite/

 

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Posted November 9, 2017 by Robert Smit [MVP] in Windows Cluster, Windows Server 2016

Tagged with

#ProjectHonolulu the new future of Windows Server GUI management #servermgmt #SMT #winserv   2 comments

As the Azure Server Management Tools (SMT) preview service in Azure was discontinued on June 30, 2017, we were back to Windows Server management tools such as Remote Desktop, Server Manager, Remote Server Administration Tools (RSAT), and other MMC-based management tools. See my old blog post about this: https://robertsmit.wordpress.com/2016/02/12/azure-server-management-tools-offers-a-set-of-web-gui-tools-to-manage-azurestack-servers-rsmt-asmt/

But Microsoft created a fresh new tool to manage all our servers: Project “Honolulu” is the next step in the journey to deliver on the vision for Windows Server graphical management experiences.

Looking at the interface it is great: real-time graphs, a single point of management. Loading of some components can take some time (seconds). But it does not run in IE 11, so if you run this on a management server you will need Google Chrome. I had the chance to work with Microsoft during the last couple of months on the alpha versions, and a lot of improvement has been made. Some options have disappeared in the Project ‘Honolulu’ Technical Preview, and there is a huge wish list; when you test the tool you will probably think “hey, this would be nice too”. Then go to the UserVoice page and create or vote for your item. There are a lot of items in UserVoice with some of the more popular requests from the private preview, so vote for your item and make Project “Honolulu” a piece of your own: https://aka.ms/HonoluluFeedback

Below is an overview of the standard tool set that Project “Honolulu” is offering.

image

And there is also a light footprint on memory.

image

So what does it take to run this, a huge server? No, just a quick install and you are ready to go. It runs with a self-signed certificate if you don’t have a public one.

https://robertsmit.wordpress.com/2016/02/12/azure-server-management-tools-offers-a-set-of-web-gui-tools-to-manage-azurestack-servers-rsmt-asmt/

imageimage

imageimage 

As you can see the installation is quick and easy to set up. Just pick a port number for the website and a certificate. If you don’t have one, a self-signed certificate will be created; it is valid for 60 days and you can look it up in the local computer certificate store.

image

image

After the installation you can open the icon or open a Chrome session to the server name and the port number. Eh, wait, what was the port number again?

The port number is stored in the registry:

image
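If you prefer PowerShell over regedit, the port and HTTPS setting can be read straight from that key:

#Read the configured port and HTTPS setting back from the registry
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\ServerManagementGateway' | Select-Object SmePort,UseHttps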

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ServerManagementGateway]
"SmePort"="51358"
"UseHttps"="1"

Opening the Wrong Browser :

image

After starting Honolulu in the right browser there is a quick tour. But as always, who does that? Just skip the tour, brave IT person.

image

In case the tool hangs or is not responding just restart the service.
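From PowerShell that is a one-liner; I am assuming the gateway service name is ServerManagementGateway, as installed by the preview:

#Restart the Project "Honolulu" gateway service (service name is an assumption)
Restart-Service -Name ServerManagementGateway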

image

imageimage

So after opening we all want to see the nice dashboards and overviews. Well, you need to add the machines first, and that is a lot of work.

There is no AD “select all”: it is all typing and filling in credentials. Luckily there is also an import.

 

image

 

  image

And the best part is that it is just a text file (TXT): fill in the names, comma or line separated, and you are good to go.

Wait with the credentials until you are doing this for the last server, then check the box “use this for all servers”.

Configure-SMRemoting.exe -Enable

Running this on Server 2012 R2, you will need WMF 5 or the Windows Management Framework 5.1 Preview.

Windows Management Framework 5.1 includes updates to Windows PowerShell, Windows PowerShell Desired State Configuration (DSC), Windows Remote Management (WinRM), Windows Management Instrumentation (WMI). Release notes: https://msdn.microsoft.com/en-us/powershell/wmf/5.1/release-notes

https://www.microsoft.com/en-us/download/details.aspx?id=53347

But running a Quick Cluster in Azure does not bring me the nice dashboard yet. 

image

Well, in a few days I will have this in an environment where the dashboards are showing, but for now I used the screenshot from Microsoft Ignite.

There are two sessions at Ignite about Honolulu.

image

 

Don’t forget your Feedback on Uservoice  https://aka.ms/HonoluluFeedback

More info : https://blogs.technet.microsoft.com/servermanagement/2017/09/21/video-series-an-inside-look-at-project-honolulu/

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Posted September 25, 2017 by Robert Smit [MVP] in Windows Server 2016

Tagged with

Xenapp Essentials the replacement of Azure Remote App ? or #NoGo #ARA #Citrix #CXE #Cloud #RemoteApp   1 comment

Well, it is here; it took some time, but now you can start testing with XenApp Essentials. Is it any good and can I use it for production? Well, I was a little disappointed: I was charged upfront, and the VM image I used was not usable because XenApp Essentials can’t handle Azure Managed Disks. While Azure is pushing you to use Managed Disks, Citrix XenApp Essentials is not capable of using them, therefore I had to rebuild a new image. The look and feel is the same as in Azure RemoteApp; the nice thing is you can change sizing and scaling and, to save money, a time schedule. But for testing in an MSDN subscription I hate the upfront billing, and Citrix did not tell you this.

But why not build an RDS farm in Azure? I will show this in the post below; using a profile cluster in Azure is also supported.

https://technet.microsoft.com/en-us/windows-server-docs/compute/remote-desktop-services/rds-storage-spaces-direct-deployment

For those who are unfamiliar with Azure Remote App check my blog post below.

https://robertsmit.wordpress.com/2014/06/20/microsoft-azure-hybrid-deployment-of-remoteapp-step-by-step-azure-microsoft-remoteapp-mvpbuzz-rds-hrdaas/

In this part I show you how to set things up. There are multiple ways, and each has its own choices. Citrix delivers a default image, which is a Windows 2012 image; well, I’m not going for a default image but a custom one, and this needs some work. This will be a long blog post with tons of pictures in it, as I tried to do it step by step, but some items you just need to know in Azure, otherwise it would be a really long blog. If you need more info on any item, just ping me.

Well, first I thought: let’s do this and write a quick blog on how great this is. The number of steps it took to get things running is more than I expected, but that is not a bad thing. Just be prepared: it takes time!

The interesting part is: should I use the same image, or is there an easy migration path? Well, it all depends, as with most things in IT.

The XenApp Essentials deployment workflow: in just 7 tiles you are done, but some tiles take several other little steps.

localized image

Do you want to stay on Windows Server 2012 R2? Well, I don’t think so, but there are good reasons to migrate as-is; will this work? As this blog post is just about how to set up Citrix XenApp Essentials, the next post will be about that integration and migration.

As Citrix XenApp Essentials is in the Azure Marketplace, we also need a Citrix account.

You can easily create a new Citrix Cloud by going to the following site: https://onboarding.cloud.com

There are a couple of questions and then you are ready to use the account.

imageimage

In case you have an issue with your account just open a support ticket and the Citrix Support will fix your issue quickly.

So in the Azure portal you can add Citrix to the menu and go from there.

image

image

You can only manage from here and not add anything, so go to the Azure Marketplace (click NEW or +) and do a Citrix search.

image

Select the Citrix XenApp Essentials

image

Click Create, pick a name for the resource and use or create a resource group.

image

Give it a name and create or use an existing Resource Group.

image

Most settings are default, but you can change them; read the text. By default it creates 25 users. Cost estimate: $456.25 per month.

Well, for my demo I don’t need 25 users, I need just 1.

image

Oh, the minimum usage is 25. OK, then I need 25 users.

Pricing

$12.00 per user per month for XenApp Essentials Service, including Citrix NetScaler Gateway Service for secure access and 1 GB data transfer per user per month.

Users added today will be charged at a prorated rate of $11.60 for the remainder of the current month. This amount will be charged immediately.

$6.25 per user per month for Microsoft Remote Access fee to use XenApp Essentials Service without purchasing a separate RDS CAL for this workload. Contact your Microsoft representative to bring your own RDS CAL.

Users added today will be charged at a prorated rate of $6.04 for the remainder of the current month. This amount will be charged immediately.

You can purchase additional 25 GB Data Transfer Add-on. The cost is $12.00 per add-on per month

When you add users and data transfer add-on to the service, the new charges apply immediately. You can change the number of users and data transfer add-on each month. Your subscription renews automatically at the end of each month unless canceled.

image

Well, the deployment took 6 seconds, but that is only the placeholder and not the VMs themselves; an order may take up to 4 hours to provision your service.

image

Shown from the Azure Portal

image

Visit Citrix Cloud to simplify the provisioning, on-going management and monitoring of Windows apps hosted on Azure. Here in the Azure portal, purchase additional seats and data transfer add-ons on-demand to meet the needs of a dynamic workforce.

Manage through Citrix Cloud

An order may take up to 4 hours to provision your service, and you will receive an email from the Citrix Cloud when your service is ready. If you do not receive an email within this time, please contact Citrix Support

Log into the XA Essentials Portal https://essentials.apps.cloud.com/

image

If you need more users you can add them in Azure.

Log into the XA Essentials Portal https://essentials.apps.cloud.com/

imageimage

image

An order may take up to 4 hours to provision your service, and you will receive an email from the Citrix Cloud when your service is ready. If you do not receive an email within this time, please contact Citrix Support

In almost 4 hours I got the email.

image

Your Citrix product has been shipped via electronic delivery on April 01, 2017, to the email specified on your
purchase order.
Your Citrix order is completely fulfilled. All items on your purchase order have been shipped to the requested
address.

image 

Depending on your other Citrix products, you choose the XenApp Service.

image

There are 3 steps needed: link the subscription, upload a master image, and finally create your catalog.

 

image image

The Microsoft login dialog box is prompting for credentials. You must use an account that has admin privileges on your Azure subscription.

Remember: if your user account is not working, the account MUST be an Azure AD account.

image

image

The next step is creating the XA Essentials catalog. In these steps the image is mounted and the AD connection, network and applications are configured.

An important step, full of options. To set up XA Essentials you need:

  • Azure subscription
  • Resource groups for the Cloud Connector, images, etc. (you can also use just one resource group)
  • Domain controller with Active Directory Domain Services and DNS
  • Virtual network configured for domain usage
  • A subnet with free IP addresses

Click Create Catalog.

image    image

Select the Network and the Resource group

image 

As I need some extra resources for creating the image, I’ll use extra storage accounts.

Image Requirements

Use the following requirements to create a custom image:

  • Create the image by using Azure Resource Manager.
  • Configure the image to use standard (not premium) storage.
  • Select Windows Server 2012 R2 or later.
  • Install and configure your apps
  • Install the Server OS VDA. You can download the VDA by using the Downloads link on the navigation bar.
  • Shut down the virtual machine and note the VHD location. Do not Sysprep the image.

And DON’T use managed disks for a custom image in XenApp Essentials: you can’t use them for the Citrix images. #Fail

And a good thing there is a brake on my Azure credits, though not good for the blog: it seems Citrix is charging upfront, another failure #Fail, but this is only on my MSDN subscription. At this point I can’t finish my blog post #GRRRRR

image

But the nice thing is picking my machine type, like a G5, just for fun or when needed.

image

Using a default VM like a D2 for the default of 25 users? Think again and look at your performance resources right now; the cost will be at least double for 25 users.

image

image

Scale settings: for my current customers we had a custom script in place for automatic scaling of Remote Desktop Session Hosts in Azure Virtual Machines.

https://gallery.technet.microsoft.com/scriptcenter/Automatic-Scaling-of-9b4f5e76/view/Discussions

But in Citrix it is all there. Currently it is maximized at 200 users, but if I build more collections I can scale up even in the test environment.

Yes, there are flaws in it, but it is a replacement for ARA. Second, building this can take up some time, while you have already paid for a month, and it renews every month!

The pricing is as described on the Citrix site:

xa-essentials-faq

Requires 25 user minimum. Includes NetScaler Gateway Service with 1 GB Data transfer per user per month.  Additional NetScaler Gateway Service 25 GB Data transfer Add-on available for $12 per pack per month
Available from Azure Marketplace when purchasing XenApp Essentials. Please consult your Microsoft representative to bring your own RDS CAL.

 

So there are now a couple of options: build an RDS farm in Azure, good for large companies or those who need more flexibility, or use Citrix XenApp Essentials. And will Microsoft come with a replacement for Azure RemoteApp? When checking my blog stats I see a huge hit count on Azure RDS. The Citrix solution isn’t that cheap and has a minimum of 25 users, but building it yourself could be done on 1 server; the price will be more than $486, and who wants to run more than 2 users on a D2? When using SaaS applications or other web-based stuff, 3 GB of memory and 14% or more CPU usage is not uncommon.

image

Cheap? No. Useful? Yes. An Azure RemoteApp replacement? Yes. Perfect? Absolutely not.

In my next post I will do a deep dive into some configuration issues; see it as Azure RDS vs CXE.

 

Follow Me on Twitter @ClusterMVP

Follow My blog https://robertsmit.wordpress.com

Linkedin Profile Http://nl.linkedin.com/in/robertsmit

Google Me : https://www.google.nl

Bing Me : http://tinyurl.com/j6ny39w

LMGTFY : http://lmgtfy.com/?q=robert+smit+mvp+blog

Posted April 4, 2017 by Robert Smit [MVP] in Windows Server 2016, Xenapp Essentials

Tagged with
