In a recent redesign of a PKI infrastructure, I engaged Microsoft to help implement some best practices, as the previous PKI design had been set up by the “guy who knows the most about certificates” about a decade ago.

As part of this process, the PFE (Premier Field Engineer) stated that the certsrv web page is being deprecated within Microsoft in favor of command line and MMC functionality. With that in mind, I made it a point to publish only the templates that were absolutely necessary and to keep the site as little more than an easy place to download the certificate chain and CRL. It was funny how quickly I realized I used that web page way more than I thought I did.

To keep from constantly referring to TechNet or running certreq /?, I put together this quick PowerShell script to help automate the process. I also added a little Windows Forms integration so that some of the application teams could request their own certs instead of me constantly requesting new ones for them for testing, etc.

This isn’t groundbreaking or anything and it isn’t the first script with this functionality, but it saves me a bit of time :).

#requires -Version 3.0

function Get-CertificateRequestFile {
  param (
    [string]$InitialDirectory = $PSScriptRoot
  )
  Add-Type -AssemblyName System.Windows.Forms
  $ShowDialog = New-Object System.Windows.Forms.OpenFileDialog
  $ShowDialog.InitialDirectory = $InitialDirectory
  $ShowDialog.Filter = "CSR File (*.csr)|*.csr|Request File (*.req)|*.req|Text File (*.txt)|*.txt|All Files (*.*)|*.*"
  $ShowDialog.ShowDialog() | Out-Null
  return $ShowDialog.FileName
}


function Get-CertificateTemplates {
  # '-config -' makes certutil prompt for the CA to use; keep only the 'Host\CA Name' config line
  $script:IssuingCA = certutil -config - -ping
  $script:IssuingCA = $script:IssuingCA | Where-Object { ($_ -match '\\') -and ($_ -notmatch 'Connecting') }
  $TemplateList = certutil -CATemplates -config $script:IssuingCA
  return $TemplateList
}

$script:IssuingCA = ""
$TemplateItems = @{}
$i = 0
$RequestFile = Get-CertificateRequestFile
$Templates = Get-CertificateTemplates

# certutil -CATemplates returns lines like 'TemplateName: Display Name -- <access check result>'
foreach ($Template in $Templates) {
  if ($Template.Contains("--")) {
    $CurrentItem = $Template -split ' -- '
    $TemplateItems.Add($i,$CurrentItem[0])
    $i++
  }
} 
do { 
  Clear-Host
  Write-Output "`n"
  Write-Output "Selected Certificate Authority: $script:IssuingCA`n"
  $TemplateItems.GetEnumerator() | Sort-Object Name | ForEach-Object {Write-Output (" {0} - {1}" -F $_.Key, $_.Value)}
  $SelectedItem = Read-Host -Prompt "`nSelect the number for the requested template (CTRL+C to quit)"
  if ($SelectedItem -notin @(0..($i - 1))) {
    Write-Host -ForegroundColor Yellow "Please select a valid number or CTRL+C to quit."
    Start-Sleep -Seconds 2
  }
} while ($SelectedItem -notin @(0..($i - 1)))

$results = $TemplateItems.GetEnumerator() | Where-Object { $_.Key -eq $SelectedItem}
$SelectedTemplate = ($($results.Value -split ':')[0]).Trim()

# Submit the request to the selected CA, stamping it with the chosen template
certreq -submit -config $script:IssuingCA -attrib "CertificateTemplate:$SelectedTemplate" $RequestFile

Clear-Variable TemplateItems
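
Once the CA issues the certificate, certreq can also finish the job: save the issued certificate when the submit step prompts you, then accept it on the machine that generated the request so it gets bound to the original private key (the path below is just an example):

# Binds the issued certificate to the private key from the original request
certreq -accept C:\Certs\issuedcert.cer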

So you were probably redirected here and are wondering: where is the tool?

While I did have something written in C#, it was kind of a pain to keep updating and seemed to have grown into something overly complicated. So… I decided to re-write it in PowerShell. It is a side project, but I should have something ready to release before too long.

There are definitely other cool versions of something like this out there, but they seemed to do one or two things rather than everything. For example, one would clean excluded files but not excluded directories, or it would work with local UPM settings but not really integrate with AD policies, etc. I want a tool that can clean one or all profiles and remove both the files and the directories that are excluded. So I cracked open ISE and off I went.

Sorry for the inconvenience; I hope you'll find it worth the wait once it's ready.

Working with some older hardware (HP DL585 G7 servers and NC523SFP 10Gb dual-port adapters), I ran into an issue with a Hyper-V cluster where the nodes would intermittently crash with a DPC_WATCHDOG_VIOLATION (stop code 0x133). The crash could be reproduced reliably by manually initiating a Live Migration. This error is essentially caused by a driver exceeding a timeout threshold. You can read more about the watchdog violation here, and if you’re feeling really geeky, you can read about DPC objects and driver I/O here.

After analyzing the memory.dmp, the stack pointed to the QLogic driver (dlxgnd64.sys). As I’m sure you would, I proceeded to update the driver for the Intelligent NIC; however, since the server was already a little over two years old, the latest version of the HP driver was already installed. Hmm… Next, I went to QLogic directly and looked up their part number for the NC523, which they OEM for HP; it turned out to be the QLE3242. The driver on the QLogic site was more current, so I gave that a shot. After updating, I tested again with a Live Migration and once again enjoyed the lovely cornflower blue hue of the BSOD. Crap. Back to Google.

After additional digging, I found errors in the System event log with Event ID 106 regarding load-balanced teaming on the NIC. After a little research, I ran across this article on MS Support. Again, I’ll let you read the details, but in a nutshell, the NICs in the team were overlapping their usage of the same processors. Since I was using hyper-threading, I followed the steps in the article to give each NIC its own base processor and to cap the number of processors VMQ could use:

Set-NetAdapterVmq -Name "Ethernet1" -BaseProcessorNumber 4 -MaxProcessors 8    # VMQ uses processors 4,6,8,10,12,14,16,18
Set-NetAdapterVmq -Name "Ethernet2" -BaseProcessorNumber 20 -MaxProcessors 8   # VMQ uses processors 20,22,24,26,28,30,32,34

This did not require a restart, and once I made the changes on the NICs, I was able to Live Migrate without any crashes. I will also note that although I updated the drivers here, I tested the VMQ change without a driver update on another Hyper-V cluster with identical hardware, and it resolved the issue there as well. I burned about 6 to 8 hours banging my head on various troubleshooting items, including several I didn’t include here, so I hope this post saves you a bit of time and headache.
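
If you want to sanity-check the result, the VMQ cmdlets that ship with Windows Server 2012 R2 will show how the processors and queues actually landed (the adapter names below are the ones from the example above):

# Confirm the base processor and processor count that VMQ is now using per adapter
Get-NetAdapterVmq -Name "Ethernet1","Ethernet2"

# See which queues are currently allocated and which processor each one is serviced by
Get-NetAdapterVmqQueue -Name "Ethernet1","Ethernet2"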

After upgrading StoreFront from 2.5 to 3.5, I noticed that all published applications where the VDA was running on Windows 2012 R2 started displaying the Windows logon process in a splash screen.

The applications continued to launch successfully, but this splash screen did not start appearing until after the StoreFront upgrade. It also did not occur on VDAs running Windows 2008 R2, only on 2012 R2 servers. The fix was to set the following registry value on the VDA:
Key: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\CitrixLogon
Name: DisableStatus
Type: REG_DWORD
Value: 0x00000000
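
If you would rather script the change than edit the registry by hand, a quick sketch using the key and value exactly as listed above would look like this:

# Create the key if it is missing, then set DisableStatus as described above
$key = 'HKLM:\SOFTWARE\Wow6432Node\CitrixLogon'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'DisableStatus' -PropertyType DWord -Value 0 -Force | Out-Null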

So as part of a recent upgrade I was performing, I upgraded a couple of Netscaler Access Gateways from version 10.1 to version 10.5. The upgrade went very smoothly, no errors, no user calls… for a while. The next day, we started receiving some calls regarding issues with launching apps via Storefront. Some users were receiving the “SSL Error 43: The proxy denied access to…” error with their STA ticket when clicking on their application icons on the web page.

Tracking down the servers based on the STA ID in the ticket, I noticed that users only had issues when they were attempting to authenticate to Windows 2012 R2 delivery controllers. The Windows 2008 R2 delivery controllers were not denying the STA requests. Jumping on one of the Windows 2012 R2 delivery controllers, I noticed the System event log was flooded with Schannel errors for Event ID 36874 (An TLS 1.2 connection request was received from a remote client application, but none of the cipher suites supported by the client application are supported by the server. The SSL connection request has failed.) and Event ID 36888 (A fatal alert was generated and sent to the remote endpoint. This may result in termination of the connection. The TLS protocol defined fatal error code is 40. The Windows SChannel error state is 1205.).

Well, we obviously have an SSL issue, but these codes aren’t exactly pointing me anywhere. Looking up the error code on the RFC page for the TLS protocol (http://tools.ietf.org/html/rfc5246), I found that error code 40 is a handshake failure (you can find this in section A.3 of the appendix, in the Alert Messages section). I can’t remember exactly where I found the enum definition for the Schannel 1205 code, but it basically means that a fatal error was sent to the endpoint and the connection was being forcibly terminated. At least I now knew there was an issue with the SSL handshake between the Netscalers and the Windows 2012 R2 delivery controllers. Time for some network tracing.

Firing up Wireshark on the delivery controller, I could see that the connection was getting immediately reset by the server after the Client Hello from the Netscaler.

Windows_2012_R2_RST_ACK

Expanding the Client Hello packet in the capture, I could see the list of ciphers currently being offered by the Netscaler. (Note: for the sake of easier troubleshooting, I left the default grouping of ciphers in place, since it is a large group of widely accepted ciphers, until I identified the issue, and then trimmed down the cipher list. You should limit the ciphers available on the virtual server of your Access Gateway to just what you need and leverage the more current, stronger methods available, such as AES 256 over RC4 and MD5, where possible.)

Cipher suites

Next, I configured the SSL Cipher Suite Order on the Windows server to match what the Netscaler was presenting in the Client Hello packet, at least the top 10 or so. This can be done using either gpedit.msc for local policy or via the Group Policy Management Console as follows (a scripted equivalent follows the policy steps below):

  1. In either editor, expand Computer Configuration/Administrative Templates/Network.
  2. Click on SSL Cipher Suite Order under SSL Configuration Settings.
  3. Select the Enabled option and then follow the instructions in the Help section of the policy. Basically, all the ciphers you want will be listed on a single line separated by commas with no spaces anywhere.
  4. You must reboot the server for the changes to take effect.

SSL Cipher Order Policy
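
For reference, that policy ultimately just writes a comma-separated cipher list to a Functions value under the SSL configuration policy key, so you can also push it by script. A rough sketch (the two suite names below are placeholders; substitute the suites from your own Client Hello):

# The SSL Cipher Suite Order policy stores its list in the 'Functions' value of this key
$path = 'HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
# Placeholder list: one line, comma-separated, no spaces, just like the policy help describes
$ciphers = 'TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA'
New-ItemProperty -Path $path -Name 'Functions' -PropertyType String -Value $ciphers -Force | Out-Null
# A reboot is still required before the new order takes effect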

Even after the reboot, the SChannel errors were still present and the network captures were still showing the handshake failing due to a reset from the server. I’ll save you the time you will spend on re-ordering the ciphers on both the Netscaler and the Windows Server 2012 R2 Delivery Controller along with the multitude of reboots that go with it; it simply won’t work (at least at the time I published this). I stepped back and decided to try tweaking the TLS protocol versions since I wasn’t getting anywhere with the cipher suites (key exchange algorithms). For the sake of brevity, after much additional testing, headbanging, and googling I was able to get the handshake to work when I disabled TLS 1.2 on the Windows 2012 server. This forced the server to renegotiate using TLS 1.1 with the Netscaler which worked with the cipher suites I tested with that were supported by both the OS and the Netscaler. I did find a nice article supporting this here for additional reference.

To disable TLS 1.2 on the server, you need to modify the registry (a scripted version of these steps follows the list):

  1. Go to HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols.
  2. If the TLS 1.2 key does not exist, create it.
  3. Inside the TLS 1.2 key, create another key called Client.
  4. Within the Client key, create two REG_DWORD values:
    a. DisabledByDefault (set the value to 1).
    b. Enabled (set the value to 0).
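
If you prefer to script it, here is a minimal sketch that mirrors the manual steps above (same key and value names, same values):

# Create the TLS 1.2\Client key described above and set the two values that disable the protocol
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name 'DisabledByDefault' -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path $path -Name 'Enabled' -PropertyType DWord -Value 0 -Force | Out-Null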

You will need to reboot one more time for the changes to take effect. This finally cleared up my SChannel errors and allowed me to add the controllers back as STAs on the virtual server, in a green status this time.