Good cloud migration planning helps avert downtime

Is your on-premises application ready to function in the cloud? The right preparation will help you to avoid downtime during a cloud migration.

Organizations have ample choices for how to migrate applications and data to the public cloud. But migration often entails downtime, which can be distressing for users and costly for the business.

Workload availability and performance are at the forefront of any cloud migration plan. Administrators must plan and execute efficient migrations that minimize application disruption and preserve data integrity.

Plan with purpose

A key aspect of cloud migration planning is the selection of applications (and associated data) that can be migrated individually, which mitigates disruption to users and the business.

In terms of resource demands and importance to the business, workloads are not equal. Some workloads and data are simple and low-priority, while others are complex and mission-critical. Also, certain workloads may be unsuitable migration candidates because of security or compliance requirements. Thus, the first step is to select a workload for cloud migration. Organizations typically start with low-priority workloads -- the classic low-hanging fruit.

It's also crucial to identify workload dependencies, and prepare to migrate those dependencies first. Otherwise, the workload may not function properly after migration -- or may not function at all -- resulting in unexpected disruption and downtime.

For example, if a workload relies on access to a database, it's impractical to migrate the workload to the cloud and expect it to continue using an in-house database. Instead, perform a database replication or migration before the actual workload migration. A workload is also likely to have many other dependencies to consider, such as application performance monitoring, backup and disaster recovery mechanisms.
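
As a rough illustration of migrating a database dependency first, the sketch below starts an AWS Database Migration Service (DMS) replication task from Python with boto3 and checks its status. It assumes a replication task and its source and target endpoints have already been created in DMS; the task ARN shown is a placeholder, not a real resource.

```python
import boto3

# Hypothetical ARN of a replication task already created in AWS DMS; replace with your own.
TASK_ARN = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK"

dms = boto3.client("dms")

# Start the replication so the cloud database stays in sync with the
# on-premises source until the workload itself is cut over.
dms.start_replication_task(
    ReplicationTaskArn=TASK_ARN,
    StartReplicationTaskType="start-replication",
)

# Check the task status; the workload migration should wait until the full
# load is complete and ongoing changes are being replicated.
task = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}]
)["ReplicationTasks"][0]
print(task["Status"])
```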

Protect data assets

As organizations conduct their cloud migration planning, it's important to include plans for data protection. Robust data protection ensures that a workload can continue to function even when the main data set is compromised.


First, back up the local data set before you implement the migration process. This provides another working copy of the data in the event that problems with a migration process impair data integrity or continuity. You can easily restore the data set from the protected copy, and the corresponding application can continue to function locally if necessary. This will help minimize downtime while you investigate and resolve any migration problems.
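
For instance, a minimal pre-migration backup might look like the sketch below, which assumes a PostgreSQL source database; the database name and host are placeholders. The same idea applies to any data store: capture a restorable copy before the migration begins.

```python
import subprocess
from datetime import datetime

# Hypothetical connection details for the on-premises database; replace with your own.
DB_NAME = "orders"
DB_HOST = "db.internal.example.com"

# Timestamped dump file so the pre-migration copy is easy to identify later.
backup_file = f"{DB_NAME}-premigration-{datetime.now():%Y%m%d%H%M}.dump"

# pg_dump's custom format (-Fc) produces a compressed archive that pg_restore
# can bring back quickly if the migration has to be rolled back.
subprocess.run(
    ["pg_dump", "-h", DB_HOST, "-Fc", "-f", backup_file, DB_NAME],
    check=True,
)
print(f"Pre-migration backup written to {backup_file}")
```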

Also, organizations should not overlook ongoing data protection once a workload is fully migrated to the public cloud. The fact that a data set resides in the cloud is no guarantee of its reliability or availability. In addition to the traditional worry of disk failures, remember that application faults and malicious events can potentially compromise application data. Part of effective cloud migration planning is to explore a cloud provider's default storage protections, weigh those defaults against the needs of the business, and then design additional protection for the data set once the workload is running in the cloud.
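
One small example of such additional protection, assuming the migrated data lands in Amazon S3, is to enable object versioning so overwrites and deletions don't destroy the only copy. The bucket name below is a placeholder; other providers offer comparable controls, such as soft delete in Azure storage.

```python
import boto3

# Hypothetical bucket that will hold the migrated data set.
BUCKET = "example-migrated-data"

s3 = boto3.client("s3")

# Versioning retains prior object versions, so an application fault or
# malicious overwrite does not eliminate the only copy of the data.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Confirm the setting took effect.
print(s3.get_bucket_versioning(Bucket=BUCKET).get("Status"))
```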

Preload large data sets

Regardless of the approach used to migrate workloads and data to the public cloud, the time required to execute the actual move can pose a serious risk of downtime. Even with aggressive migration approaches, such as master-replica or multi-master migrations, the potential need to quiesce, or pause, an application or data set to ensure a consistent copy can result in downtime. That can be intolerable for multi-terabyte or petabyte-sized data sets transmitted over a typical public internet connection.

Large data sets demand more creative thinking. Administrators can save significant migration time if they preload the bulk of the data set into the cloud before the actual migration process. For example, use a recent backup or snapshot of the data set, rather than the working data set, as the source for the copy to the cloud, and then sync the data afterward as part of the migration process. This can eliminate the need to quiesce the workload or data set for long periods during the copy process.
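
A simple way to picture the preload-then-sync pattern, assuming file-based data headed to Amazon S3, is the sketch below. It drives the AWS CLI's `aws s3 sync` command from Python; the snapshot directory, live directory and bucket URI are placeholders.

```python
import subprocess

# Hypothetical locations; replace with your own snapshot path and bucket.
SNAPSHOT_DIR = "/backups/app-snapshot-latest"
LIVE_DIR = "/data/app"
BUCKET_URI = "s3://example-migrated-data/app"

def sync(source: str) -> None:
    # `aws s3 sync` copies only new or changed files, so the second pass
    # against the live data set transfers just the delta accumulated since
    # the snapshot was preloaded.
    subprocess.run(["aws", "s3", "sync", source, BUCKET_URI], check=True)

# 1. Preload the bulk of the data from a recent snapshot -- the workload keeps running.
sync(SNAPSHOT_DIR)

# 2. During the migration window, briefly quiesce the workload and sync only the changes.
sync(LIVE_DIR)
```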

Organizations can further accelerate data transfers with technologies such as AWS Direct Connect and S3 Transfer Acceleration. Physical transfer devices such as AWS Snowball and AWS Snowmobile offer an alternative for petabyte- or exabyte-scale transfers. And interface technologies such as AWS Storage Gateway can connect local storage resources to the AWS cloud. For those migrating to Azure, there's Azure Data Box and related products. Google Cloud Platform, meanwhile, offers Transfer Appliance among other technologies to aid in data transfer.
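
As a brief sketch of one of these options, S3 Transfer Acceleration can be enabled on a bucket and then used for uploads via the accelerated endpoint, as shown below with boto3. The bucket and file names are placeholders, and whether acceleration actually helps depends on the network path between the source site and AWS.

```python
import boto3
from botocore.config import Config

BUCKET = "example-migrated-data"  # hypothetical bucket name

# Turn on Transfer Acceleration for the bucket (a one-time setting).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads routed through the accelerate endpoint use AWS edge locations,
# which can shorten long-haul transfers over the public internet.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("archive.tar.gz", BUCKET, "bulk/archive.tar.gz")
```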

Consider migration tools

Minimizing disruption and downtime often hinges on the completeness of a cloud migration plan, as well as proper execution. This can easily fall short when a cloud migration is implemented manually. Public cloud providers offer services designed specifically to aid migrations, which often allow for better planning, scheduling and automation. These services can reduce the threat of disruption and downtime during a migration.

AWS has a range of services, including the AWS Database Migration Service, AWS Server Migration Service, AWS Migration Hub and a variety of offline data transfer appliances (such as AWS Snowball). As one example, AWS Server Migration Service is intended for large-scale migrations, so an organization can automate and monitor the replication of up to thousands of server volumes to AWS instances, thus easing the planning and coordination burden of major migration initiatives.
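
For example, an administrator tracking a large batch of server replications might poll job status programmatically rather than watch the console. The sketch below uses boto3's client for AWS Server Migration Service and is illustrative only; it assumes replication jobs have already been created for the servers in question.

```python
import boto3

sms = boto3.client("sms")

# List the server replication jobs registered with AWS Server Migration
# Service and report their state, so large batches can be tracked in one place.
jobs = sms.get_replication_jobs().get("replicationJobList", [])
for job in jobs:
    print(job["replicationJobId"], job["serverId"], job["state"])
```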

Microsoft Azure provides the Azure Migrate service, as well as the Azure Database Migration Service and Data Box hardware for offline data transfers to Azure.

Google's migration service offerings include Migrate for Compute Engine. The company also relies on service partners to help users with database and other migrations into Google Cloud Platform.

Third-party migration tools and services include Carbonite Migrate, Corent SurPaaS and Micro Focus PlateSpin Migrate. Third-party tools can be attractive because they typically support migrations to or from multiple clouds, which brings greater flexibility to organizations with multi-cloud strategies and keeps the number of tools or services in use to a minimum.

Remember that tools may offer automation, but tools are not automatic. The use of migration tools, whether from a public cloud provider or a third-party vendor, does not guarantee migration success. As with any business tool, evaluate prospective tools in advance, select the most appropriate tool for the migration objectives, and be proficient in the tool's use before you commit to a particular one.

Select the most effective migration tactics

A master-replica arrangement is the most common approach to public cloud data migrations. It essentially creates a copy of the data in the cloud and maintains synchronization with the working data set until the actual application is migrated and cut over. Data copying demands substantial time, which equates to potential downtime, so it's usually best to create the replica environment well in advance of any migration process.

If the data set must be quiesced in order to create a copy, it may be least disruptive to create a local backup of the data set first and then use the backup to create the replica in the cloud. Synchronize the data sets after the replica is created to save significant time.
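
Before the cutover itself, it helps to confirm that the cloud replica has caught up with the on-premises source. The sketch below assumes a MySQL replica in the cloud and uses the PyMySQL library to check replication lag; the connection details are placeholders, and newer MySQL versions use SHOW REPLICA STATUS instead.

```python
import pymysql
from pymysql.cursors import DictCursor

# Hypothetical connection details for the cloud replica; replace with your own.
conn = pymysql.connect(
    host="replica.example.cloud",
    user="migration",
    password="********",
    cursorclass=DictCursor,
)

# Seconds_Behind_Master reports how far the replica trails the on-premises
# source; cut over only once the lag has effectively reached zero.
with conn.cursor() as cur:
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()
    print("Replication lag (seconds):", status["Seconds_Behind_Master"])
```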

Extremely large data sets may benefit from offline data transfers, as discussed above; synchronize the data sets once the offline transfer is live in the cloud. If an application is involved, migrate the VM, configure it to accept traffic and use the replica data set, and then cut over at a predetermined point.

It is also possible to employ a master-master approach for public cloud migrations, particularly when the focus of the migration is the application rather than the data. The master-master tactic resembles clustering: duplicates of the application and data are deployed and load balanced in the cloud, but they run concurrently and share the load with the existing local deployment. The master-master approach imposes a more complex deployment scenario because the data sets must still be synchronized to ensure consistency. Still, a clustering approach can be a good choice for organizations that need local fallback capability or require extensive validation of the cloud environment before making a final switchover.

Consider application monitoring

Your cloud migration planning will understandably focus on the switch from on premises to the cloud, but it's not enough to simply move a workload. The workload must continue to function within acceptable performance parameters to meet user needs and business SLAs.

Application performance monitoring (APM) tools such as New Relic APM, AppDynamics SaaS and Datadog APM can be applied to cloud workloads to gather, track and report on key application performance metrics. APM results can confirm that a migration is successful, and when performance problems occur, application stakeholders can quickly identify the issues and start effective troubleshooting.
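
To make the idea concrete, the sketch below shows the kind of per-request latency measurement that APM agents automate: time each call and record the result so pre- and post-migration performance can be compared. The function names are illustrative, not part of any particular APM product.

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)

def timed(func):
    # Record how long each call takes -- the same latency metric an APM agent
    # would collect and report against pre- and post-migration baselines.
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("%s took %.1f ms", func.__name__, elapsed_ms)
    return wrapper

@timed
def handle_request():
    # Stand-in for real application work.
    time.sleep(0.05)

handle_request()
```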
