6 Lessons Learned During a Real-World Azure Migration
My articles usually look at new and improved technology offerings from Microsoft and others, but this time I thought I’d cover something a little different.
In my role as a business owner and IT consultant, I get the opportunity to implement on-premises technologies (Windows Server and associated services) and cloud services such as Microsoft 365 and Azure for businesses. My business is a Microsoft Cloud Service Provider (CSP). Over the last few months I’ve been involved in a project where we migrated a small business’s IT infrastructure to Azure, and as the migration nears its end, I thought I’d summarize the process and the lessons learned along the way.
Before Christmas last year a colleague invited me to the first meeting with the client, a small accountancy firm here in Australia — let’s call them Progress Accounting to protect the innocent.
They have just over 10 staff members in a main office, plus two smaller branch offices, and at the time had five servers running in a VMware-based hosting environment in Brisbane (approximately 70 miles away) and Office 365 Business Premium for email and collaboration. Their hosting company was changing its business model and had asked them to move their servers “somewhere else.” The hosting company pressed them to do this as soon as possible, but an office relocation forced a delay of a few months. The hosting company had also run Azure Migrate, a free service that assesses current workloads and their suitability for moving to Azure, specifying equivalent VM sizes in Azure and expected monthly costs. Figure 1 shows a screenshot of the report that we initially started with.
After careful consideration and the realization that their line-of-business (LOB) application and data were already protected by Multi-Factor Authentication (MFA) native to the application, we made the decision to use local Windows accounts.
Lesson 2: The current way of doing identity isn’t always the best or only way forward.
This left two Windows Server 2016 servers that needed to be migrated. One was running Citrix, which we planned to migrate away from, converting the server into a Remote Desktop Session Host (RDSH). All the users were connecting to this server for their day-to-day activities and to a second Windows Server 2016 machine running SQL Server, which supported their LOB applications. There was also a shared file server, but we decided to move its file shares to the SQL Server, limiting the number of servers to manage and maintain.
The hosting company offered to do the migration over Christmas for free to speed up the process. We provided configuration details and access to the Azure subscription that we’d set up for the client. After some weeks of silence, we checked in on the progress and found that none had been made, ostensibly due to “lack of time.”
Lesson 3: Don’t rely on the previous IT provider to do the job right — or do it at all.
The Migration Phase
As we didn’t have access to the hosting infrastructure, we asked the hosting company to install and configure Azure Site Recovery (ASR) for VMware migrations to Azure; we could then access the process server/configuration server, which is downloaded as an Open Virtualization Application (OVA) template. Replication was set up and started for the two VMs. We had one troubleshooting incident that involved Azure support: installing the replication agent a second time over an already installed agent broke the trust between the agent and the process server.
Lesson 4: Don’t just reinstall to fix problems — sometimes that causes more problems.
After the VMs at the hosting company and the disks in Azure were in sync, we did test failovers to make sure that the VMs worked as expected in Azure. We moved them to a workgroup environment and set up local accounts for all users, added entries in the hosts file to make sure the two servers could find each other, uninstalled Citrix and added RDS client licenses, linked existing user profiles to the new local accounts, and re-created file shares for databases and document shares.
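With no domain controller in the workgroup setup, name resolution between the two servers depends on those hosts-file entries. The sketch below shows the idea; the server names and private IPs are hypothetical examples, not the client’s actual values, and on Windows the real file lives at C:\Windows\System32\drivers\etc\hosts.

```python
# Hypothetical server names and private IPs for the two workgroup servers.
ENTRIES = {
    "rdsh01": "10.0.0.4",  # RDSH server (assumed private IP)
    "sql01": "10.0.0.5",   # SQL Server / file server (assumed private IP)
}

def hosts_lines(entries):
    """Return hosts-file lines of the form '<ip> <name>'."""
    return [f"{ip} {name}" for name, ip in entries.items()]

def append_missing(path, entries):
    """Append any entries not already present in the file at `path`."""
    with open(path, "r+", encoding="utf-8") as f:
        existing = f.read()
        for line in hosts_lines(entries):
            if line not in existing:
                f.write("\n" + line)

print("\n".join(hosts_lines(ENTRIES)))
```

Static entries like these survive VM restarts, which matters here because the servers keep their assigned private IPs inside the virtual network.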
One issue here is that ASR’s test failover is designed to verify that you can fail over to Azure when you have a disaster (the main use case for ASR), but those test VMs are meant to be deleted after testing. We wanted to keep one of them, as the configuration work was too time-consuming to repeat, something that Azure didn’t provide for easily.
ASR is free for the first 31 days for each VM to help you migrate to Azure; after that, a per-instance fee applies.
Network Security Groups were used to lock down access to the servers, and password policies (especially lockout policies) were established as these servers would be accessed directly over the Internet. We were now ready to go live with the new environment.
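Because these servers are reachable directly over the Internet, the lockout policy is what blunts password-spray attacks against RDP. The sketch below models the basic mechanics; the threshold and lockout duration are illustrative values, not the exact policy we configured.

```python
from datetime import datetime, timedelta

THRESHOLD = 5                    # illustrative: failed attempts before lockout
LOCKOUT = timedelta(minutes=15)  # illustrative: how long the account stays locked

class Account:
    """Minimal model of Windows-style account lockout."""
    def __init__(self):
        self.failed = 0
        self.locked_until = None

    def is_locked(self, now):
        return self.locked_until is not None and now < self.locked_until

    def record_failure(self, now):
        if self.is_locked(now):
            return  # attempts against a locked account don't extend the lockout here
        self.failed += 1
        if self.failed >= THRESHOLD:
            self.locked_until = now + LOCKOUT
            self.failed = 0

acct = Account()
start = datetime(2019, 1, 1, 9, 0)
for _ in range(5):
    acct.record_failure(start)
print(acct.is_locked(start))            # True: locked after 5 failures
print(acct.is_locked(start + LOCKOUT))  # False: window has expired
```

The point of the model: an attacker is throttled to a handful of guesses per lockout window, which turns an online brute-force attempt from hours into years.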
The Big Day
On site at the client’s office we reconfigured all client computers to point to the new servers, ensuring that each user could log on successfully to the new environment. One hiccup was printing: in their old environment there was a Citrix-specific link back to their multifunction scanner/printer on-premises, so we had to rely on printer redirection, with a local printer on each workstation projected into the RDP session.
Lesson 5: Never forget about printing; it’s still essential, even 20 years after the “paperless office.”
This device was also used to scan documents that were then sent via FTP to a watch folder on the server — this had to be reconfigured with new passwords as no one (including the current IT provider) had any idea what that password was.
Lesson 6: Always expect the unexpected — especially when it comes to non-existent documentation.
To protect the main server’s RDP access, we implemented Duo Security’s MFA solution, as the users were already comfortable with using MFA. We used DNS names for the public IP addresses in Azure and connected via those names instead of the IP addresses, as these can change in Azure.
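Azure builds the DNS name for a public IP from the label you assign plus the region, in the form `<label>.<region>.cloudapp.azure.com`, so RDP shortcuts keep working even when the underlying address changes. The label and region below are hypothetical examples:

```python
def public_ip_fqdn(label, region):
    """FQDN Azure assigns to a public IP with a DNS name label.
    Label and region values here are illustrative, not the client's real ones."""
    return f"{label}.{region}.cloudapp.azure.com"

# Hypothetical RDSH server label in the Australia East region:
print(public_ip_fqdn("progress-rdsh", "australiaeast"))
```

Users then connect to that name from the Remote Desktop client rather than an IP address that may be reassigned when the VM is deallocated.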
This setup has now been in production for a couple of months. The users are happy with the performance; logon times for some users with large profiles have gone from more than 20 minutes to about 10 seconds.
One issue is that ASR creates premium (SSD-based) managed disks in Azure (if you configure it as such), where you pay based on the provisioned size of the disk, not the amount of data stored. As disk sizes are based on the migrated VMs’ disk sizes, these were found to be too large and unnecessarily costly (Figure 3), and you can’t reduce disk sizes in Azure (you can only increase them). The solution was to attach new, smaller managed disks and copy the data from one disk to the other.
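The billing impact is easy to see with some arithmetic. Premium disks are billed at fixed tier sizes (P10 = 128 GiB up through P30 = 1,024 GiB and beyond); the per-month prices in this sketch are illustrative placeholders, not actual Azure pricing:

```python
# (tier name, provisioned GiB, assumed USD/month — placeholder prices)
TIERS = [
    ("P10", 128, 20.0),
    ("P15", 256, 38.0),
    ("P20", 512, 73.0),
    ("P30", 1024, 135.0),
]

def tier_for(size_gib):
    """Smallest premium tier in this sketch that fits the requested size."""
    for name, size, price in TIERS:
        if size_gib <= size:
            return name, size, price
    raise ValueError("size exceeds largest tier in this sketch")

# A migrated 1 TiB disk bills at the P30 tier even if only ~100 GiB is used;
# copying the data to a fresh disk sized for the actual data drops it to P10.
old_tier = tier_for(1024)
new_tier = tier_for(100)
print(old_tier[0], new_tier[0])       # P30 P10
print(old_tier[2] - new_tier[2])      # monthly saving at the assumed prices
```

Since you pay for provisioned capacity regardless of usage, right-sizing after migration is one of the quickest cost wins.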
Also note that host caching for ASR-migrated disks is set to None by default; the recommendation for premium SSD data disks is to use read-only caching to improve performance.
We also use Azure Backup to back up each VM daily (with hourly backups of the SQL Server log files), keeping the backups for 30 days and storing the data across both public Azure regions in Australia, which provides decent disaster recovery in the case of a prolonged region outage. A better solution would be to use ASR between the regions, but it’s costlier, and the client doesn’t have a business requirement to be up and running within minutes after a disaster.
To save on compute costs we use Azure Automation to turn off the VMs at 9 p.m. each night local time and then turn them back on at 5 a.m. each morning.
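Since Azure VMs stop accruing compute charges while deallocated, the 9 p.m.–5 a.m. window translates directly into savings. The hourly rate below is a placeholder, not real pricing; the fraction saved depends only on the schedule:

```python
HOURS_PER_DAY = 24
off_hours = (24 - 21) + 5          # 9 p.m.–midnight plus midnight–5 a.m.
on_hours = HOURS_PER_DAY - off_hours

assumed_rate_per_hour = 0.20       # USD/hour, illustrative placeholder
days = 30
always_on = assumed_rate_per_hour * HOURS_PER_DAY * days
scheduled = assumed_rate_per_hour * on_hours * days

print(off_hours)                            # 8 hours off per day
print(round(1 - scheduled / always_on, 2))  # ~0.33 of compute cost saved
```

Roughly a third of the monthly compute bill disappears, and shutting down only overnight (rather than weekends too) keeps things simple for staff who occasionally work late.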
All in all, it was a successful migration with a good outcome for the client. We’re now looking at switching to reserved instances for cost reduction, and because they upgraded their Office 365 Business Premium subscription to Microsoft 365 E3, we’re going to implement Mobile Application Management (MAM) policies to protect data on personal mobile devices. We’ll also be upgrading all their workstations to Windows 10 Enterprise to unlock further security improvements. I hope you found this overview of a real implementation of public cloud computing useful.