sysadmin and other technological pontifications

Properly Configuring SSL Certificates for Remote Desktop Services

Properly securing Remote Desktop Services with an SSL certificate is a subject that causes frequent confusion among IT professionals. For the purposes of this article, we'll be discussing Remote Desktop deployments on Windows Server 2012/2016. While documentation exists on TechNet and other sources describing the SSL requirements, I have yet to find a comprehensive source that covers all the possible scenarios where the requirements differ from one to the next. The four core questions that most folks get stuck on are the following.


  1. What kind of certificate do I need?
  2. How do I generate the certificate to deploy it across the environment?
  3. What server names need to be covered by the certificate?
  4. What do I do if my internal domain differs from my external domain?


Before we can talk about the four questions above, we need to identify each of the core Remote Desktop roles that will need securing and what their functions are.


  • RD Web Access: This is an externally facing service that provides the web interface the users will access to login and run their RemoteApps
  • RD Gateway: This is an externally facing service that receives RDS connections from the internet, checks them against a defined set of connection policies, and passes them on to the Connection Broker
  • RD Connection Broker: This is an internal service that handles all the session management for incoming RDS connections. In an environment with multiple session hosts, for example, the connection broker is responsible for load balancing the connections evenly across the farm.
  • RD Session Host: These are the workhorses of the environment, and are the servers that the users are logging into, and where the published RemoteApps execute from.


Now that we have defined the services that need to be covered by our SSL certificate, let’s talk about the certificate itself. Since we potentially need to secure multiple server names, we need to purchase either a SAN or a wildcard certificate. In most cases I recommend a wildcard certificate, since it will cover an unlimited number of hosts under a single domain. SAN certificates are certainly a little cheaper; however, they cover only a small, fixed number of hosts, so if your RDS environment expands in the future, you’re going to need to issue a wildcard down the line anyway.

Regardless of which certificate type you choose, it will need to cover all the servers in the environment:


  • The RD Web Access URL you have chosen
  • The RD Gateway URL you have chosen
  • The internal server name of the Connection Broker
  • The internal server names of your session hosts


Initially creating the certificate can be done from any copy of IIS, just as you would for any other web server certificate. When the RD Web Access role is deployed, it adds IIS to the server, and you can use the Internet Information Services (IIS) Manager to accomplish this. Ensure that you generate the certificate through a known certificate authority (Network Solutions, RapidSSL, GoDaddy, etc.) so that your certificate will be trusted.

Once the certificate is installed, it needs to be exported to PFX format so it can be deployed to the other servers in the environment.


  1. Open MMC
  2. Add the Certificates Snap-In for the Local Computer Context (You should find your certificate under Personal\Certificates)
  3. Right-Click on the certificate, All Tasks > Export
  4. Ensure that you choose YES when asked to export the private key
  5. Choose Personal Information Exchange and include all certificates in the certification chain
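If you prefer to script the export, the PKI module cmdlets available on Server 2012 and later can do the same thing. The following is a sketch; the certificate subject match and file path are placeholders you would substitute with your own values.

```powershell
# Find the certificate in the local machine store by subject
# (substitute your own wildcard subject), then export it with the
# private key and full chain to a password-protected PFX file.
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*domain.com*" }

$pwd = Read-Host -Prompt "PFX password" -AsSecureString

Export-PfxCertificate -Cert $cert -FilePath C:\certs\wildcard.pfx `
    -Password $pwd -ChainOption BuildChain
```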


Once you have the PFX file, deploying the certificate to the environment can be accomplished under the Edit Deployment\Certificates section of the Remote Desktop Services management panel in Server Manager. Simply choose each role service and supply your Exported PFX.
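The same deployment can also be scripted with the RemoteDesktop module on Server 2012 R2 and later. The broker name and PFX path below are placeholders; this is a sketch, not the only way to do it:

```powershell
Import-Module RemoteDesktop

$pwd = Read-Host -Prompt "PFX password" -AsSecureString

# Apply the same PFX to each of the four role services
foreach ($role in "RDGateway","RDWebAccess","RDRedirector","RDPublishing") {
    Set-RDCertificate -Role $role -ImportPath C:\certs\wildcard.pfx `
        -Password $pwd -ConnectionBroker cb.domain.local -Force
}
```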

Now that we understand how and where the certificate is generated and deployed, we’ll talk about the two distinct DNS scenarios that will have an impact on your deployment.


Split DNS Scenario

This is the easier of the two possible deployment situations. Consider the following:


  • Externally, domain.com is handled by an external DNS provider
  • Internally, domain.com is handled by your internal AD DNS domain


In this case, all internal server names (specifically the connection broker and session hosts) will resolve to server.domain.com and match your wildcard certificate. No additional action should be required for everything to run smoothly.


.local Internal Domain Names

This scenario is where things get a little tricky. Consider the following:


  • Externally, domain.com is handled by an external DNS provider
  • Internally, the local AD DNS domain is domain.local


The Web Access and Gateway roles will not need modification as those only require external DNS entries, but this will present a problem for your internal services. When a user opens a RemoteApp, it will first hit the gateway, but then get internally forwarded to the Connection Broker using the internal hostname. Because these internal hostnames are using .local addresses, your certificate will not match and the connection will fail.

Fortunately, while it takes a little effort, this is fairly easily solved.

The first thing we need to do is modify the connection URL of the Connection Broker. While there is no way to do this through the GUI, I have used the following PowerShell script numerous times for just this purpose.


If your internal connection broker hostname is cb.domain.local, you can issue the following command using the above script to change it to match your certificate.


Set-RDPublishedName "cb.domain.com"


Once that’s done, the next problem is that cb.domain.com will now need to resolve to the local IP address of the Connection Broker (Just like cb.domain.local does). To accomplish this, you need to make your internal AD DNS server authoritative for that particular host.


  1. Open Active Directory DNS
  2. Create a new Primary Zone for the fully qualified name of the connection broker (cb.domain.com)
  3. Create a single A record, leave the name field blank, and enter the local IP address of the Connection Broker


The end result is that, internally, any DNS lookup for cb.domain.com will resolve to the local address of your Connection Broker, while all other DNS requests for domain.com will still be answered by your external DNS provider.
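On Server 2012 and later, the same pinpoint zone can be created with the DnsServer PowerShell module. This is a sketch; the broker's internal address of 192.168.1.10 is a placeholder:

```powershell
# Create a zone named after the broker's FQDN, then add a blank-named
# A record pointing at its internal address.
Add-DnsServerPrimaryZone -Name "cb.domain.com" -ReplicationScope Domain
Add-DnsServerResourceRecordA -ZoneName "cb.domain.com" -Name "@" `
    -IPv4Address "192.168.1.10"
```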

IIS 7.0+ Upload Restrictions and Failures

By default, IIS 7.0+ has restrictions on the maximum file size that can be uploaded to the web server. If these restricted attributes are not modified, you will experience failures when attempting to send larger sized data through the web server. This article discusses the three attributes below, listed with their default values. For the purposes of this article, I am setting all of the values to 1073741824 Bytes, or 1 Gigabyte.


  • maxRequestEntityAllowed (200,000 bytes)
  • Maximum Allowed Content Length (30,000,000 bytes)
  • UploadReadAheadSize (49,152 bytes)



maxRequestEntityAllowed

This attribute specifies the maximum number of bytes allowed in the body of an ASP request. If you are using a classic ASP script to upload data, this attribute will be relevant.


  1. Open Internet Information Services (IIS) Manager under Administrative Tools
  2. Highlight the relevant website
  3. Double click ASP from the Features View
  4. Expand the Limits Properties Branch
  5. Locate and change the Maximum Requesting Entity Body Limit to “1073741824”
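The same change can be made from an elevated command prompt with AppCmd; the site name below is a placeholder for your own:

```shell
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
    -section:system.webServer/asp /limits.maxRequestEntityAllowed:1073741824
```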


Maximum Allowed Content Length

This attribute specifies the maximum length of an HTTP request. While the limit is 30MB by default, we should change this value to match the above.


  1. Open Internet Information Services (IIS) Manager under Administrative Tools
  2. Highlight the relevant website
  3. Double click Request Filtering from the features view
  4. Click the Edit Feature Setting link on the right hand side pane
  5. Locate and change the Maximum allowed content length to “1073741824”
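Equivalently, the limit can be set directly in the site's web.config; this fragment assumes an otherwise default request filtering configuration:

```xml
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- 1 GB, in bytes -->
        <requestLimits maxAllowedContentLength="1073741824" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```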



UploadReadAheadSize

You must have both of the following role services installed for IIS to be able to make this attribute change:


  • IIS 6 Management Compatibility
  • IIS 6 Scripting Tools


If SSL is enabled on the website in question, long HTTP requests may cause HTTP 413 errors if the entire request cannot fit into the SSL preload.

Issue the following from an administrative command prompt to increase the limit of the SSL Preload:


cscript c:\inetpub\adminscripts\adsutil.vbs set w3svc/1/uploadreadaheadsize 1073741824
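On IIS 7 and later, the same value can also be set with AppCmd against the serverRuntime section, which avoids the IIS 6 compatibility requirement; a sketch, again using "Default Web Site" as a placeholder:

```shell
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
    -section:system.webServer/serverRuntime /uploadReadAheadSize:1073741824 ^
    /commit:apphost
```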


After making all of the proposed changes, it is recommended that you restart the IIS service.

Incorrect fonts & garbled characters when printing through Terminal Services

I recently worked on an issue with a Remote Desktop user who reported that when printing through RDP, the resulting output would contain incorrect fonts and random characters. After trying multiple versions of the print driver on the local machine to no avail, I found the following hotfix, which resolved the issue in its entirety. This appears to be a specific issue with the .NET Framework 3 on Windows XP.


The title description of the hotfix would lead you to believe that it has nothing to do with Remote Desktop Services; however, note the second paragraph:

Additionally, when you print from a terminal server session by using the Easy Print driver, there may be problems in the print output. Specifically, there may be unexpected fonts in the printed document, the document text may be compressed or truncated, or the document text may contain unexpected random characters. This behavior may affect print jobs that range from e-mail messages to the “Print the test page” document from the printer properties.

How to Disable Autorun

The Autorun feature in Windows allows removable devices like CDs and flash drives to auto-execute an application when they are inserted. An example would be inserting an application CD into your optical drive and the setup program automatically starting. This is accomplished by the use of an autorun.inf file in the root of the removable media that directs Windows as to what it is supposed to execute.

Unfortunately, in the last couple of years it has become increasingly popular for attackers to launch malicious code by inserting or modifying the autorun.inf on an infected flash drive, for example. This makes it extremely easy for malware to spread through removable media, because it can be installed simply by plugging the device in. Most current virus scan products will automatically clean removable media when it is inserted; however, in the case of a brand new virus that cannot be detected yet, this won't help you.

The best solution is to simply disable the Autorun feature, so that even if you do plug an infected device into your machine, the malicious code won’t run automatically. The obvious downside of doing this is that you will now need to start setup programs or audio CDs manually when you insert them into your computer. A small price to pay in my opinion.

You can accomplish this by modifying the local policy on your individual machine (or for your entire Windows domain through Group Policy). The instructions below are taken directly from KB967715.


  1. Click Start, type Gpedit.msc in the Start Search box, and then press ENTER. If you are prompted for an administrator password or for confirmation, type the password, or click Allow.
  2. Under Computer Configuration, expand Administrative Templates, expand Windows Components, and then click Autoplay Policies.
  3. In the Details pane, double-click Turn off Autoplay.
  4. Click Enabled, and then select All drives in the Turn off Autoplay box to disable Autorun on all drives.
  5. Restart the computer.
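The same policy maps to a registry value, which is useful on Windows editions that don't include the Group Policy editor. This fragment corresponds to the "All drives" setting (0xFF), per KB967715:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```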


MS12-020: Critical Vulnerabilities in Remote Desktop

On Tuesday, March 13th 2012, Microsoft released fixes for two reported vulnerabilities in the Remote Desktop Protocol described in the link below:

Microsoft Security Bulletin MS12-020 – Critical

The fixes for these two vulnerabilities can be reviewed here:


Obviously, Microsoft releases critical security updates every month; however, the problem that KB2621440 addresses is critically important. By sending specially crafted RDP packets to the target server, an attacker can gain complete administrative control over the machine in question. This is not only a concern for companies running publicly accessible terminal servers, but even more critical for all the Windows based cloud servers that use RDP as the primary method of remote administration. When (and I won’t even say if) attackers develop a worm that takes advantage of this exploit, it has the potential to be as bad as or worse than anything we’ve seen in the past few years.

Microsoft does mention that if the server requires Network Level Authentication for RDP connections, the attack surface is drastically reduced, since the attacker would need valid login credentials before being able to exploit the vulnerability. While this is a positive, it probably isn’t the case in most instances, since NLA isn’t the default configuration for Remote Desktop Services.

Definitely be proactive about this one, get those servers patched!

Printer Fails to Print Multiple Copies

I just recently discovered that none of my HP Printers would print multiple copies of jobs. Regardless of the number of copies I selected through the print dialog, only one copy would be printed without any sort of error being generated.

Evidently, this is caused by Mopier Mode being enabled on the printer in question. Mopier Mode is an HP feature that produces multiple copies from a single print job. To use this functionality successfully, the printer must be fitted with the extra memory required to store and produce the additional copies locally. Since most standard out-of-the-box HP printers won’t have this, the function fails and only a single copy is printed. To fix this problem, simply disable the feature in the Device Settings tab of the printer properties dialog.



The Windows Equivalent of Touch

Every now and again I find myself wishing that there were a Windows equivalent of the UNIX/Linux touch command to update time stamps on files. Well, it turns out that there is an easy way to do this, which I stumbled upon, using the copy command illustrated below:

copy /b filename.ext +,,

The result of the above operation is an unmodified source file with an updated time stamp.

  • The /b parameter specifies that the file is binary; you will always want to use this option unless you are stamping an ASCII-based file (ASCII is the default).
  • The + parameter is normally used to combine multiple files together, but is used by itself in this situation.
  • The required destination parameter is omitted by using the ,,
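If PowerShell is available, the same effect can be had more directly by writing the file's LastWriteTime property; a quick sketch with a placeholder file name:

```powershell
# Update a file's modified time stamp to now, leaving its contents untouched
(Get-Item .\somefile.txt).LastWriteTime = Get-Date
```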

vmnetcfg missing from VMware Player

I haven’t used the VMware Player in a while; however, the need recently arose when I had to deploy some virtual machines at an upcoming trade show. After the installation, I noticed that vmnetcfg was nowhere to be found: no shortcuts, and the executable was missing from the VMware Player program folder. For those who aren’t aware, vmnetcfg is a utility that allows you to reconfigure the virtual networks that the Player uses. In my case, I needed to bind VMnet0 (the bridged network) to my wired network card, since it didn’t seem to work correctly with my WiFi network.

It turns out that while the VMware Player installer does not deploy the tool anywhere, it can be extracted manually and added to the installation via the following procedure:

  1. Download and install the VMware Player package as you normally would
  2. When the installation is complete, run the installer again from a command prompt with the /e switch to extract all of the cab files from the installer package (substitute your installer’s file name). This will put them into a folder called “extracted” at the same directory level as the installer package:

VMware-player-setup.exe /e extracted

  3. Navigate into the extracted folder and locate the network.cab file
  4. Double-click network.cab and locate vmnetcfg.exe inside
  5. Copy vmnetcfg.exe and paste it into the VMware Player program folder


At this point, you can run the executable, and happily reconfigure your virtual networks. Enjoy!

Read-Only Problems with Word Documents

I encountered a problem recently where users were reporting that documents they were working on in Word 2007/2010 were randomly becoming read-only, preventing them from saving changes. Initially I was skeptical of this, and rather assumed that a certain subset of the documents were actually marked read-only for some reason. My first thought was that someone had flipped the “Read-Only Recommended” flag on the document, which would not only cause the document to open read-only, but also cause any other files created from that original document to be read-only as well. Not to go off on a tangent here, but you can check whether this flag is set on a document through the Save As dialog in Word, under Tools\General Options:

To my dismay, one of the example documents the user provided was not marked with this switch, and furthermore, I (and the user) could open the document and re-save it without any problem.

To make a long story short, the document did not open in read-only mode; however, at some point during the session (even with a few intermediary saves in between), the document would suddenly become read-only and prompt for a Save As. After a lot of searching around the web, I found that the workaround for this problem is to disable “Allow Background Saves” under the Save category of the advanced options in Word.

Despite what you would assume, this feature doesn’t have anything to do with the autosave feature; rather, it determines whether save operations occur in the foreground or background. With this setting disabled, you will need to wait for Word to finish any save processing before you can continue typing or working with the document. This really isn’t a big deal, and you would probably only notice a delay when working with very large files.

So far, the problem has disappeared.

Detecting and Removing Rootkits

After spending a good deal of time chasing down and removing an infection of the SpyEye Trojan, I thought it might be fitting to write about detecting rootkits, and some of the free tools that are available to help you do so.

Generally, a rootkit can be defined as a piece of software designed to allow continued access to a compromised system for a malicious purpose. In the case of the SpyEye Trojan I mentioned above, that purpose is to collect passwords, banking information, credit card numbers, social security numbers, and other sensitive information from anyone using an infected machine. What makes rootkits particularly nasty is that unlike most viruses, which usually have some immediate and obvious damaging effect, they are designed to be completely hidden. A well crafted rootkit will not do any damage to the infected machine, and will happily collect all of the above-mentioned personal information without the user suspecting anything.

Rootkits typically hide themselves by altering the results returned from the Windows API to control what the user sees. You might assume that REGEDIT, for example, is a low-level tool that lets you browse the registry directly; in reality, it is just an application that requests data from the Windows API and displays what is returned. A rootkit could therefore place its startup information under [HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run], but you wouldn’t be able to see it via REGEDIT because it is being suppressed. The same thing can be done with files on disk or running processes, making the rootkit completely invisible in user mode.

To detect the presence of rootkits, you need tools that can bypass the Windows API and examine the information at the lowest possible level to make a comparison. While there are lots of tools that do this, my favorite is GMER. Its rootkit/malware scan compares the raw data inside the registry and file system with the data that the Windows API returns to find mismatches. This gives you a nice list of all the items that are hidden, which will usually reveal most rootkits, and it offers to remove them for you. What also makes this utility great is that it gives you an “untainted” interface to directly view the process list, file system, services, and registry.

GMER obviously isn’t your only choice (virtually all of the antivirus vendors offer free detectors), but I happen to like the more hands-on approach and the additional browsing utilities that GMER includes.

I would recommend downloading and using these tools frequently, especially if you use your computer for online banking, bill pay, or other sensitive identity related activities.