sysadmin and other technological pontifications

Enabling Network Level Authentication for RDP in XP SP3

Windows Vista/7/2008 has the option of requiring Network Level Authentication (NLA) when acting as a Remote Desktop host. Without going into great detail, NLA offers a higher level of security for your RDP sessions and a lower resource requirement during the authentication process.

This setting is controlled by the dialog below under [My Computer Properties\Remote Settings], and during the Remote Desktop Services setup procedure if you are building a Terminal Server.

You may notice that RDP connections from a Windows XP workstation to an NLA host will fail with the following error:

The remote computer requires Network Level Authentication, which your computer does not support

This could, of course, be rectified by disabling the NLA requirement on the Remote Desktop host; however, NLA support can be added to Windows XP SP3 very easily by making the following changes to the Windows Registry. (Note that the instructions below are copied directly from KB951608.)

  1. Click Start, click Run, type regedit, and then press ENTER.
  2. In the navigation pane, locate and then click the following registry subkey:
  3. In the details pane, right-click Security Packages, and then click Modify.
  4. In the Value data box, type tspkg. Leave any data that is specific to other SSPs, and then click OK.
  5. In the navigation pane, locate and then click the following registry subkey:
  6. In the details pane, right-click SecurityProviders, and then click Modify.
  7. In the Value data box, type credssp.dll. Leave any data that is specific to other SSPs, and then click OK.
  8. Exit Registry Editor.
  9. Restart the computer.
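For reference, the two subkeys involved (per KB951608) can be inspected from a command prompt before you edit them — a sketch, assuming the stock reg.exe syntax:

```shell
rem Subkey paths per KB951608. When editing, append tspkg / credssp.dll
rem to the existing value data rather than replacing it.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v "Security Packages"
reg query "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders" /v SecurityProviders
```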

After performing the above, you shouldn’t have any trouble connecting to an NLA host from an XP Workstation.

Installing Telnet on Windows Vista/7

This is hardly breaking news, but I just randomly noticed that the Microsoft Telnet client (command line) can be added back to a Windows Vista/7 installation. The Telnet client is installed by default on Windows XP and earlier, but I had assumed that Microsoft removed it entirely from Vista/7, given that it’s an insecure protocol. While I don’t use or advocate telnet in any situation, it can still be really handy for testing ports when troubleshooting firewall configurations and things of that nature.

You can go ahead and add it back via [Control Panel\Programs and Features\Turn Windows features on and off].
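As a sketch, the same feature can also be enabled from an elevated command prompt via DISM, and the port-testing use mentioned above looks like this (the host name and port here are placeholders):

```shell
rem Add the Telnet client without touching Control Panel
dism /online /enable-feature /featurename:TelnetClient

rem Test whether a firewall permits SMTP to a given host
telnet mail.example.com 25
```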

Adjusting DPI in Terminal Services

I recently deployed a new Terminal Server running 2008 R2, and a few of my users asked whether they could adjust the DPI settings to make the fonts larger. To my dismay, these settings appear to be disabled for users logged in via Terminal Services.

After a little searching, I was surprised to find that there is no standard Microsoft solution to this problem. After a little more searching, I found an alternative solution on Oracle’s blog which worked perfectly.

A big thanks to Bradford Lackey for an easy workaround.

UPS Shutdown with ESXi

Setting up your servers to properly integrate with your UPS units and shut down cleanly in the event of a power disruption is something that usually doesn’t take more than 5 minutes in the physical world. Doing this properly for all of your ESXi Virtual Machines, however, can be a bit more involved depending on what type of shutdown software is available for your UPS.

I recently expanded my virtual environment with a few new Dell R610s and decided to try out the Dell-branded UPS units. I also purchased the optional network interface cards for them, since shutdown via a USB cable isn’t an option for guest OSes in ESXi.

It’s worth mentioning that one of the significant differences between ESX and ESXi is that the latter no longer has a service console. The console was quite useful, since you could use it to run scripts and install services, UPS shutdown services among them. VMware’s answer to this problem is the vSphere Management Assistant (vMA), a free Linux-based virtual appliance that can be added to your inventory to perform all the tasks the old service console used to.

My new Dell UPS did have an ESXi package that could utilize vMA to shut down my guests; however, the documentation was terrible, and Dell technical support was no help in clarifying the setup questions I had. Due to my short attention span, I decided to come up with my own solution utilizing the standard Windows shutdown software I’ve used a thousand times before. Here is what you need:

  • A Windows based VM (Very minimal resource allocation)
  • Windows based UPS software (Dell, APC, etc)
  • vSphere CLI installed to the Windows VM
  • VMware Tools installed on all the guests you wish to shut down

You can go ahead and set up the shutdown software in Windows as you normally would, according to whatever your shutdown time frame is, but the key is to specify a batch file to be executed before the shutdown occurs. This script contains the CLI commands that will shut your virtual machines down properly. The example below shuts down two virtual machines, then powers down the ESXi host:

cd /d "C:\Program Files\VMware\VMware vSphere CLI\Perl\apps\vm"
vmcontrol.pl --server <esxi host> --vmname <vm1> --username <esxi username> --password <esxi password> --operation shutdown
vmcontrol.pl --server <esxi host> --vmname <vm2> --username <esxi username> --password <esxi password> --operation shutdown

rem Give the guests a couple of minutes to shut down cleanly
sleep 120

cd /d "C:\Program Files\VMware\VMware vSphere CLI\bin"
vicfg-hostops.pl --server <esxi host> --username <esxi username> --password <esxi password> --operation shutdown --force


The sleep command is not included with Windows, but you can grab the executable from numerous places on the web, or directly from Microsoft as part of the Server 2003 Resource Kit. You don’t need the sleep command to create delays in Windows batch files, but I’m lazy. The end result here is that when the UPS software triggers a shutdown, it executes the above script first and shuts everything down cleanly.

It is worth mentioning that you should use the IP address of the ESXi host rather than its DNS name in the script. If your DNS servers shut down before this script executes, the script will stall because it won’t be able to resolve the name of the ESXi host. I shot myself in the foot with that one the first time I tested it.

High RPC Latency in Outlook

Microsoft has published plenty of knowledge base documentation explaining the multitude of reasons for poor Outlook performance and high RPC latency [with Exchange Server]. I recently had a troubleshooting experience with one of my hosting providers that I felt compelled to share.

The environment is as follows:

Server 2008 R2 Terminal Services running Outlook 2010 on domain A
Exchange Server 2010 running in domain B (hosted Exchange style setup)
Domain A and B on same effective LAN, Outlook running RPC over TCP/IP in online mode

The RPC delays were horrible (into the 6000ms range), which made Outlook nearly unusable.

An identical setup with the same Terminal Server, but using mail accounts from a different hosted mail domain [running Exchange 2007] did not exhibit the problem at all. Countless hours were spent troubleshooting, Microsoft PSS came up snake eyes, and we ended up rolling the users back to the Exchange 2007 environment that did work.

A few weeks later, after re-investigating the problem with some test accounts, I found that disabling a single check box in Outlook completely fixed the problem and brought the RPC latency right back under 100ms.

Even though Desktop/Instant Search was disabled in Outlook, just having that check box enabled caused the problem. My guess is that Outlook pre-indexes all of the e-mail so that, in the event you do enable search, the index is available right away.

RPC Latency after disabling this switch:

The hardest, most dire IT problems always turn out to have the dumbest one-click solutions in the end.

Recursive Removal of Zone Information

Since Windows XP SP2, downloading a file from an Internet source attaches zone information to the file, marking it as coming from an untrusted source. You can see this by bringing up the properties dialog for the file in question.
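For context, the zone information lives in an NTFS alternate data stream named Zone.Identifier attached to the file, and its contents are a small INI-style fragment along these lines (ZoneId 3 denotes the Internet zone):

```
[ZoneTransfer]
ZoneId=3
```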

This can cause irritating prompts, particularly with Office documents, because Word will open them in Protected View, forcing the user to click through a warning before being able to edit them. Okay, so just click the Unblock button on the file and be done with it, right? That works, but what happens when you have 2GB of files, all flagged in this manner?

I ran into this situation when I extracted a huge downloaded zip file containing about 7k files. I didn’t remember to unblock the zip file first, so all of the resulting files carried the zone information with them. Re-extracting the files wasn’t an option, because users had already been working with them for a few days.

Sysinternals to the rescue. They have an awesome tool called Streams that will let you recursively remove the zone information from every file in a structure by issuing a single command at the root of the folder.
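A sketch of the invocation, based on Streams’ documented switches (-s to recurse subdirectories, -d to delete the streams) — the folder path here is a placeholder for your own root:

```shell
cd /d C:\ExtractedFiles
streams -s -d *
```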

Poof. Zone information gone.

Setting Wallpaper via GPO in Windows 7/2008 R2

Doesn’t work.

This little bug drove me insane for a week as I tried to figure out why my default wallpaper wasn’t being displayed for any of the users on my 2008 R2 Terminal Server. I had originally convinced myself that it was Terminal Server related, so all of my web crawling sent me in entirely the wrong direction. It turns out that this is a known Group Policy issue with Windows 7 and 2008 R2, as described in KB977944.

I opted to go with the workaround, and just populated the Wallpaper string under [HKEY_CURRENT_USER\Control Panel\Desktop] for each user logging into the TS. All fixed.
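The workaround can be scripted — from a logon script, for example — along these lines (the wallpaper path is a placeholder):

```shell
rem Populate the per-user Wallpaper value directly
reg add "HKCU\Control Panel\Desktop" /v Wallpaper /t REG_SZ /d "C:\Wallpaper\corporate.bmp" /f
```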

Microsoft, the Underdog

Something very significant is happening in the consumer computing market right now. For the first time in history, a growing majority considers Microsoft to be an underdog, second to the likes of Apple. Ten years ago, I probably would have laughed if someone had suggested that the unstoppable Death Star could be challenged by anyone, but things have taken a turn that I don’t think many people would have predicted.

The ironic thing about the future of computing is that it depends on people who aren’t technically savvy. Computers and devices that are both trendy and easy to use win people over, and Apple really captured the market when it released the first iPod in 2001. While it didn’t happen overnight, everyone on the planet has an iPod now, and Apple has brilliantly continued to create products that people are willing to donate extra organs to have. What inevitably will start to happen (and is already underway) is that people will start to buy Macs over PCs. If the younger generations are coming out of the womb with iPods attached to their heads, there is little incentive, when it comes time to buy a new laptop, to even consider a PC if everything else in their life is already Apple.

The fact of the matter is that Microsoft just isn’t very innovative, and they aren’t capturing the attention of the consumer market. Now, before I get accused of being an Apple fanboy, I admit that I still prefer the Windows environment for my own reasons, and have no plans of jumping to the other side of the fence (Well, maybe I’ll get an iPad). Regardless, it doesn’t take much to realize that the writing is on the wall, and we’re all in for some big changes over the next few years.

SunGard’s E-Mail Availability Service

Finding an effective disaster recovery solution for Microsoft Exchange Server can be challenging, especially for a small business with a limited infrastructure. One possibility that comes to mind is to move the Exchange environment off-site entirely into a hosted setup, however this might not be the optimal choice depending on your business requirements. In fact, there are a lot of good reasons to keep Exchange in house, as long as you have a competent administrator who can maintain the environment.

When I began my own search for a DR solution, my requirements were fairly simple. I needed a cost effective, off-site failover e-mail system that could take over in the event of any sort of local failure on my own network. SunGard’s EAS seemed to provide exactly what I needed, and being a managed service, made implementation and maintenance a piece of cake.

Installation highlights are as follows:

  1. Install the EAS Controller software to a dedicated box onsite
  2. Add a secondary MX record that points to SunGard’s backup network
  3. Deploy the system to the users

The EAS Controller software keeps your Exchange environment synchronized with the backup servers, which includes all the mailboxes, aliases, contacts, and calendars. An installation engineer from Dell works with you from start to finish on this setup, and the only prerequisite is a mail-enabled service account for the software.

A secondary MX record is required so that, in the event you activate the service, your mail can flow to SunGard and remain accessible to the users.
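In DNS zone-file terms, the result looks something like this (the names and preference values are hypothetical — a higher preference number means the backup is only tried when the primary is unreachable):

```
example.com.   IN  MX  10  mail.example.com.
example.com.   IN  MX  20  backup.sungard.example.
```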

User deployment is simple and is done via a web control panel. The system sends out a custom e-mail to all the mailboxes prompting each user to create a password and supply an alternative e-mail address. This address is used when the system is activated, reminding the user of the URL to go to and retrieve their mail. There are a multitude of additional options that can be configured for the users and for the site as a whole, but I’m not going to go into any detail on those here.

The user side experience depends on what type of DR situation you are facing:

  1. Localized problem with Exchange (Internet Connection and user workstations are available)
  2. Complete site failure (Servers/Workstations down due to extended power outage)

Situation 1 is where EAS really shines. Assuming your users are running Outlook in Cached Mode, SunGard has an Outlook plug-in which will automatically re-direct the mailbox to the backup environment. The end result is that the users won’t even notice that Exchange is down, and mail will continue to flow to the user’s inbox. Any received messages can be forwarded/replied to just like any other message.

Situation 2 involves a more complete failure where the local site is completely unavailable. After the service is activated, a notice containing a URL is sent to each user’s alternate e-mail address as mentioned above. The URL takes them to a webmail system that contains all the incoming e-mail, along with all of the user’s calendar and contact data. Users can work via this webmail system until the local site returns to service.

Both the situations mentioned above involve a recovery process after the disaster is over to synchronize the e-mail back to the Exchange mailboxes. This process is completed from the EAS Controller software, which pushes all the mail back via MAPI.

While I haven’t had to use the system in a disaster situation yet (knock, knock), I’ve done a few test activations, which worked extremely well. I’m very impressed thus far, and at $6 to $8 per mailbox, per month, it’s a steal.

Coming Soon…

The site is up, first article on the way soon. Stay tuned!