Monday, 18 February 2013

Virtualising a server to retain legacy software


Takeaway: Virtualisation can buy you a little more time when you can’t face replacing that aging application. Mark Pimperton describes how and why his company did it.
We never run servers outside their manufacturer warranty, and certainly no longer than five years. (If you have physical servers older than that, I hope you have a good risk management strategy in place.) Six months ago I knew we were due to replace our oldest application server in January 2013. Trouble was, the server runs our time and attendance (T & A) software, which won’t run on any OS later than Windows Server 2003. Although it’s often this kind of situation that leads to companies replacing applications, we had neither the time nor the inclination to do so.
A few years back I might have considered buying a new box and trying to make the old OS run on it directly. That can be fraught with difficulty, though, if drivers for the old version of Windows don’t exist for the latest hardware. Nowadays the answer is obvious — just virtualise the old server and run it on any suitable hypervisor.

Points to consider

There are a few points to consider when adopting this strategy.
  • Communications: Does the old hardware rely on USB or serial communications? Although there are virtualisation solutions for this, they may not be straightforward. Our time and attendance software talks to time clocks, but they're on the network, identified by IP address (a quick way to sanity-check that kind of connectivity is sketched after this list). We also had a serial connection to our telephone exchange for logging phone call data, but a call to the exchange manufacturer pointed us to a much more capable IP-based logging solution.
  • Support: If you have a support agreement for your legacy application, will the software vendor support it in a virtualised environment? Quite often the "official" answer has been no, simply because the vendor hasn't tested the software on a virtual machine. (Our ERP vendor claims to "officially" support only VMware, even though the system has been proven to run fine on Hyper-V. And the small print says that even on VMware they reserve the right to decide that the virtualisation platform is causing a problem and ask you to re-test on a non-virtualised system!) In the case of our time and attendance application, we didn't ask about support, as we knew they'd just say it hadn't been tested.
  • Licensing: While moving to a virtual platform can present licensing nightmares for some applications (Oracle, for example), our situation was straightforward. The T & A application is covered by a site license, so it presented no problem. The server OS, however, was an OEM license bought with the hardware; as such, when the hardware goes, the license goes. Fortunately our chosen host OS (Windows Server 2008 Enterprise) includes licenses for up to four virtualised Windows instances, so our virtualised legacy server would be covered by one of those.
  • Hypervisor: I’ll leave you to read for yourself about the “Hyper-V vs. VMware” debate. For us the choice was easy, because we were already working with Hyper-V for some development machines, and although I’d briefly worked with VMware, I didn’t find it easy. Familiarity counts for a lot.
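On the communications point above: if, like ours, your peripherals are reached over the network, it's worth confirming the virtualised server can still see them before going live. Here's a minimal Python sketch (the clock addresses and port are hypothetical, not our real ones):

```python
import socket

# Hypothetical time clock addresses and port -- substitute your own.
TIME_CLOCKS = ["192.168.1.50", "192.168.1.51"]
PORT = 4370

for ip in TIME_CLOCKS:
    try:
        # Attempt a plain TCP connection with a short timeout.
        with socket.create_connection((ip, PORT), timeout=3):
            print(f"{ip}: reachable")
    except OSError as exc:
        print(f"{ip}: NOT reachable ({exc})")
```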

P2V

Our physical-to-virtual (P2V) tool of choice was our backup system, which creates complete bootable images (I'll write more about it in a future article). In the past I've also used a Microsoft tool called Disk2vhd, which worked well. For us the process consisted of firing off a backup and then, once it had completed, shutting down the physical server to prevent any further data changes. We exported the images as Virtual Hard Disk (.VHD) files and imported them into Hyper-V on our host server.
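If you go the Disk2vhd route, it's scriptable too. Here's a minimal sketch of driving it from Python (the paths are hypothetical, and do check disk2vhd's own usage text before relying on the exact arguments):

```python
import subprocess
from pathlib import Path

# Hypothetical locations -- adjust for your environment.
DISK2VHD = r"C:\Tools\disk2vhd.exe"           # Sysinternals Disk2vhd
OUTPUT_VHD = Path(r"\\backupbox\p2v\legacy-ta.vhd")

# "*" captures every volume on the machine into the VHD;
# -accepteula suppresses the Sysinternals licence prompt.
subprocess.run(
    [DISK2VHD, "-accepteula", "*", str(OUTPUT_VHD)],
    check=True,  # raise if disk2vhd exits with an error
)
print(f"Bootable image written to {OUTPUT_VHD}")
```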
On booting the new VM (initially isolated from the network), the first hurdle we faced was that Windows Server 2003 demanded to be re-activated. Since the server wasn’t on the network, the only option was the phone route; this involves calling a Microsoft number and following the automated prompts. Provided you have a genuine Windows product key to begin with, this is a straightforward, if somewhat long-winded, process.
After some off-network checks of the installed applications, installing the Hyper-V Integration Services, checking the computer name, and setting the IP address, we connected the VM to the network and began our tests. Although it looked good on the face of it (especially as our T & A software seemed happy), we soon found a string of problems:
  • No Sophos Anti-Virus clients could update.
  • Clients trying to use shipping software for one of our couriers also couldn’t connect.
  • Event ID 2011 appeared in the System log, warning about the "IRPStackSize parameter."
  • Event ID 1030 appeared in the Application log, saying Group Policy objects couldn't be read.
I tried browsing to the share used for updating Sophos. Access was denied. I then realised that access was being denied to all shares on our newly-virtualised server.
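If you'd rather confirm that kind of symptom from a script than from Explorer, a few lines of Python will do it (the share name here is hypothetical):

```python
import os

# Hypothetical UNC path to a share on the virtualised server.
SHARE = r"\\legacyserver\SophosUpdate"

try:
    entries = os.listdir(SHARE)
    print(f"Share reachable; {len(entries)} entries found")
except PermissionError:
    # This is what we saw on every share on the new VM.
    print("Access denied")
except OSError as exc:
    print(f"Share unreachable: {exc}")
```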

IRPStackSize

Searching on that symptom pointed to the IRPStackSize parameter as the culprit. I added a registry value as per this TechNet article and set the parameter to 20 as suggested here. This made no difference. I tried increasing the parameter further and still nothing. Then I found another article saying a reboot was necessary to apply the registry change. After the restart everything started to work.
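If you'd rather script the change than click through regedit, here's a minimal Python sketch (run it elevated; the key path is the LanmanServer location the TechNet article describes, and 20 is the value that eventually worked for us):

```python
import winreg

# Location of the IRPStackSize value under LanmanServer.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

# Requires an elevated (administrator) prompt.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "IRPStackSize", 0, winreg.REG_DWORD, 20)

print("IRPStackSize set to 20; remember the reboot, or nothing changes")
```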
I can’t tell you what IRPStackSize is about. Nor do I know why it was fine on our physical machine but wrong on our virtual copy of the same machine. Frankly, I don’t care. Sometimes you just need stuff to work without necessarily understanding how.

Summary

Virtualising our legacy server has enabled us to keep our old application running until at least 2015, when Microsoft ends support for Windows Server 2003. The migration process was relatively straightforward, despite one very strange problem that prevented file shares from being accessed.
