When things go wrong technically: What can you do?

[Image: a data center server]
Somewhere (I think in Texas) the guts of our business are stored on a dedicated server in a data center. What happens when things don't quite go as planned?

It’s something that has happened rarely in almost a decade of blogging. Outside of a few planned vacations and the occasional one-day miss, I’ve been here virtually every day, sometimes advance-planning posts as much as a week ahead to cover time away.

But there were no new thoughts here for the past five days.

What happened?

Over the past few months, I’ve been noticing too many system crashes and sluggish load times for this blog and our network of regional construction sites. As well, Google will soon require SSL encryption for any site where viewers can, in theory, log in with a password. (In our case, the login is mainly an administrative tool for site updating, but it is still caught under the rules.)

After researching various options, including switching from a dedicated server to a cloud-based model, I engaged a consultant to complete the SSL encryption set-up and also optimize our sites for speed and efficiency. However, he quickly found another problem: the underlying system software on our server was seriously out of date.

The solution: upgrade to a new server. In the wonders of ISP pricing, the new (and faster and more reliable) server would actually be less expensive to operate. As well, our ISP (Hostgator) would provide a free "migration" service to transfer the massive amount of data from the old server to the new one.

The work was scheduled to start on the weekend and, I was told, would not result in an interruption of service from our existing sites. However, once the migration started, it would be foolish to add any new content, since anything added to the old server after that point would not carry over to the new one.

Fair enough. The work started Saturday morning, and immediately I noticed our "old" sites had gone offline. It seems the strain of the transfer, coupled with the aging infrastructure of the previous server, created so much load that the system protections kicked in and shut our sites down.

Saturday . . . Sunday. Periodically we would regain service, but there was no indication of when the work would be completed. It went on and on: the systems would shut down for several hours, be rebooted, and then go down again. Finally, about noon on Tuesday, the ISP sent notification that the work had been completed; I could check that everything was okay and then (finally) update the Domain Name System (DNS) records to point to the new server.
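That final DNS switch is worth verifying before declaring victory, because until the updated records propagate, some visitors are still routed to the old server. A minimal sketch of such a check, in Python — the domain, the IP address, and the `dns_switched` helper are all hypothetical, and the resolver is injectable so the logic can be illustrated without a live DNS lookup (in practice you might just run `dig` and compare):

```python
import socket

# Hypothetical new server address (a documentation-range IP, not our real one)
NEW_SERVER_IP = "203.0.113.10"

def dns_switched(domain, expected_ip, resolve=socket.gethostbyname):
    """Return True once `domain` resolves to the new server's address.

    `resolve` defaults to a real lookup but can be swapped for a stub.
    """
    try:
        return resolve(domain) == expected_ip
    except OSError:
        # Resolution failure: treat as "not switched over yet"
        return False

# Stubbed usage; a live check would be dns_switched("example.com", NEW_SERVER_IP)
print(dns_switched("example.com", NEW_SERVER_IP, resolve=lambda d: "203.0.113.10"))
```

Running a loop around a check like this every few minutes tells you when propagation has reached your own resolver, though caches elsewhere can lag for hours.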

There wasn’t much I could do through all of this but hang on, communicate with staff and contractors by non-domain email, and hope that readers didn’t think we had disappeared for good.

I can’t say yet whether the pain will be worth the gain; the consultant is working on the system optimization now, and hopefully it will indeed deliver faster and more reliable results in the future.

However, I learned a lesson: if you are planning significant IT infrastructure changes, be aware that things may not always go as planned or promised. It is obviously good to have backup systems and security (we do), but next time around I will build more redundancy into the process, ensure clients and viewers are notified in advance, and time the work as best as possible to avoid inconvenience.
