Archive for the ‘Windows 2008 R2’ Tag

Cluster Validation Create Cluster access is denied   Leave a comment

My configuration: a fresh new Windows 2008 R2 machine, ready to create a four-node cluster. The cluster validation process was all OK, so I just created my cluster name with an IP and no storage. I did this in PowerShell and it failed. Why? Access denied on creating the cluster.

What is wrong?

Step one: the cluster validation report is all fine, no major errors here.
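
For reference, this is roughly the validation run I mean, as a minimal PowerShell sketch (the node names are hypothetical placeholders, and it assumes the FailoverClusters module that ships with the feature is installed):

# Load the failover clustering cmdlets and validate the four nodes
Import-Module FailoverClusters
Test-Cluster -Node Node1, Node2, Node3, Node4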


Could it be a GPO that messed up my install? No.

Firewall or other local settings? No.

In PowerShell (admin mode) I tried to create the cluster: error creating cluster, access denied.
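
Roughly the command that failed, as a hedged sketch (the cluster name, node names and static IP below are placeholders, not my real values):

# Create a cluster with a name and a static IP, no storage
New-Cluster -Name ClusterName -Node Node1, Node2, Node3, Node4 -StaticAddress 192.168.1.50 -NoStorage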


Event logging: there are dedicated event logs for clustering, so I checked them.
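
If you prefer PowerShell over Event Viewer, something like the sketch below pulls the cluster channel (the exact log name is an assumption based on the usual FailoverClustering channels):

# Show the most recent entries from the failover clustering operational log
Get-WinEvent -LogName 'Microsoft-Windows-FailoverClustering/Operational' -MaxEvents 50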


So it looks OK, no errors. Time to dive deeper.


The Cluster service successfully formed the failover cluster ‘ClusterName’.

So I can create the cluster, but after that it rolls back. Why?


The cluster service has been stopped and set as disabled as part of cluster node cleanup.


So access is denied, Event ID 4657. I could use http://www.bing.com, but there must be a hint here to solve this.

There is also an Event ID 6: Kerberos issues? MaxTokenSize? OK, I checked the registry on this server. No MaxTokenSize there.

Do I need this key? Well, I checked the domain account that I used for creating this cluster, and yes, it is a member of 700 groups.
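
A quick, hedged way to see how bloated your own token is, run as the account you use to create the cluster:

# Count the group SIDs carried in the current access token
[System.Security.Principal.WindowsIdentity]::GetCurrent().Groups.Count
# Or list them one per line
whoami /groups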

Who needs that many groups? This is Windows 2008 R2, surely past the old Kerberos ticket limits, right? Yes, but the domain controller is still a Windows 2003 32-bit box.

So I put in the MaxTokenSize key, rebooted, and created the cluster. Case solved.


Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters]

"MaxTokenSize"=dword:000186a0

Posted September 20, 2010 by Robert Smit [MVP] in Windows 2008 R2


Cluster Shared Volumes: change the metric configuration with PowerShell only!   Leave a comment

 

How do I change the metric of my Cluster Shared Volumes network?

Well, if you don't want to use the automatic metric then you need to change it, but where can you change it?

About CSV:

When you enable Cluster Shared Volumes, the failover cluster automatically chooses the network that appears to be the best for CSV communication. However, you can designate the network by using the cluster network property, Metric. The lowest Metric value designates the network for CSV and internal cluster communication. The second lowest value designates the network for live migration, if live migration is used (you can also designate the network for live migration by using the failover cluster snap-in).

When the cluster sets the Metric value automatically, it uses increments of 100. For networks that do not have a default gateway setting (private networks), it sets the value to 1000 or greater. For networks that have a default gateway setting, it sets the value to 10000 or greater. Therefore, for your preferred CSV network, choose a value lower than 1000, and give it the lowest metric value of all your networks.

 

Another property, called AutoMetric, uses true and false values to track whether Metric is being controlled automatically by the cluster or has been manually defined.

 

To set the Metric value for a network, use the Windows PowerShell cmdlet Get-ClusterNetwork as described in the following procedure. For more information about the cmdlet, see Get-ClusterNetwork (http://go.microsoft.com/fwlink/?LinkId=143787).

To designate a network for Cluster Shared Volumes

  1. On a node in the cluster, click Start, click Administrative Tools, and then click Windows PowerShell Modules. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.)

  2. To identify the networks used by a failover cluster and the properties of each network, type the following:

    Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role

    A table of cluster networks and their properties appears (ft is the alias for the Format-Table cmdlet). For the Role property, 1 represents a private cluster network and 3 represents a mixed cluster network (public plus private).

  3. To change the Metric setting to 900 for the network named Cluster Network 1, type the following:

    ( Get-ClusterNetwork "Cluster Network 1" ).Metric = 900

    Note

    The AutoMetric setting changes from True to False after you manually change the Metric setting. This tells you that the cluster is not automatically assigning a Metric setting. If you want the cluster to start automatically assigning the Metric setting again for the network named Cluster Network 1, type the following:

    ( Get-ClusterNetwork "Cluster Network 1" ).AutoMetric = $true

  4. To review the network properties, repeat step 2.
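
Putting it together, a minimal end-to-end sketch ("Cluster Network 1" is just the example name used above): pin CSV traffic to one network by giving it the lowest metric, then verify.

Import-Module FailoverClusters
( Get-ClusterNetwork "Cluster Network 1" ).Metric = 900
Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role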

Posted August 31, 2010 by Robert Smit [MVP] in Windows 2008 R2


MaxUserPort – what it is, what it does, when it's important   Leave a comment

 

Recently I ran into the MaxUserPort issue and binged again to see whether other bloggers had fixed this. I found a good post from Tristan Kingston (I could not find my own post ;-( ).

Source: http://blogs.technet.com/b/tristank/archive/2008/03/11/maxuserport-what-it-is-what-it-does-when-it-s-important.aspx

 

MaxUserPort controls "outbound" TCP connections

MaxUserPort is used to limit the number of dynamic ports available to TCP/IP applications.

It’s never going to be an issue affecting inbound connections. MaxUserPort is not the right answer if you think you have an inbound connection problem.

(I don’t know why, I just know it is. Probably something to do with constraining resource use on 16MB machines, or something.)

To further simplify: it’s typically going to limit the number of outbound sockets that can be created. Note: that’s really a big fat generalization, but it’s one that works in 99% of cases.

If an application asks for the next available socket (a socket is a combination of an IP address and a port number), it’ll come from the ephemeral port range allowed by MaxUserPort. Typically, these "next available" sockets are used for outbound connections.

The default range for MaxUserPort is from 1024-5000, but the possible range is up to 65534.

When You Fiddle MaxUserPort

So, why would you change MaxUserPort?

In the web server context (equally applicable to other application servers), you’d usually need to look at MaxUserPort when:

– your server process is communicating with some type of other system (like a back-end database, or any TCP-based application server – quite often http web servers)

And:

– you are not using socket pooling, and/or

– your request model is something like one request = one outbound TCP connection (or more!)

In this type of scenario, you can run out of ephemeral ports (between 1024 and MaxUserPort) very quickly, and the problem will scale with the load applied to the system, particularly if a socket is acquired and abandoned with every request.

When a socket is abandoned, it’ll take two minutes to fall back into the pool.

Discussions about how the design could scale better if it reused sockets rather than pooling tend to be unwelcome when the users are screaming that the app is slow, or hung, or whatever, so at this point, you’d have established that new request threads are hung waiting on an available socket, and just turn up MaxUserPort to 65534.
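
As a sketch, the value lives under the Tcpip Parameters key of the older TCP/IP stack and takes effect after a reboot (0x0000fffe is 65534; note that Vista/2008 and later manage the dynamic range with "netsh int ipv4 set dynamicport" instead):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fffe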

What Next? TcpTimedWaitDelay, natch

Once MaxUserPort is at 65534, it’s still possible for the rate of port use to exceed the rate at which they’re being returned to the pool! You’ve bought yourself some headroom, though.

So how do you return connections to the pool faster?

Glad you asked: you start tweaking TcpTimedWaitDelay.

By default, a connection can’t be reused for 2 times the Maximum Segment Lifetime (MSL), which works out to 4 minutes, or so the docs claim, but according to The Lore O’ The Group here, we reckon it’s actually just the TcpTimedWaitDelay value, no doubling of anything.

TcpTimedWaitDelay lets you set a value for the Time_Wait timeout manually.

As a quick aside: the value you specify has to take retransmissions into account – a client could still be transferring data from a server when a FIN is sent by the server, and the client then gets TcpTimedWaitDelay seconds to get all the bits it wants. This could be sucky in, for example, a flaky dial-up networking scenario, or, say, New Zealand, if the client needs to retransmit a whole lot… and it’s sloooow. (and this is a global option, as far as I remember).

30 seconds is a nice, round number that either quarters or eighths (depending on who you ask – we say quarter for now) the time before a socket is reusable (without the programmer doing anything special (say, SO_REUSEADDR)).
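
A matching .reg sketch for that 30-second value (0x0000001e is 30; same Tcpip Parameters key, reboot required):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpTimedWaitDelay"=dword:0000001e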

If you’ve had to do this, at this point, you should be thinking seriously about the architecture: will this scale to whatever load requirements you have?

The maths is straightforward:

If each connection is reusable after a minimum of N (TcpTimedWaitDelay) seconds
and you are creating more than X (MaxUserPort) connections in an N second period…

Your app is going to spend time "waiting" on socket availability…

Which is what techy types call "blocking" or "hanging". Nice*!
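
To put rough, assumed numbers on it: with the defaults of about 3,976 usable ports (1024 to 5000) and a 4-minute reuse delay, you can only sustain around 16 new outbound connections per second; with MaxUserPort at 65534 and TcpTimedWaitDelay at 30 seconds, the ceiling climbs to roughly 2,150 per second.

# Back-of-the-envelope ceiling: usable ports divided by reuse delay in seconds
(5000 - 1024) / 240      # defaults: about 16 connections per second
(65534 - 1024) / 30      # tuned: about 2150 connections per second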

Fun* KB Articles:
http://support.microsoft.com/kb/319502/
http://support.microsoft.com/kb/328476

Posted July 23, 2010 by Robert Smit [MVP] in Web Servers, Windows 2008 R2
