Key considerations for refactoring applications for the cloud
Lift and shift has been a popular migration technique, but sometimes, IT teams have to refactor applications after the fact. Review these options to optimize legacy apps on the cloud.
Organizations lift and shift their workloads to the cloud because it's cheaper and quicker than other migration techniques, but that doesn't mean it's always the best fit.
IT teams often face budget and time constraints, so they believe they have little choice. Cloud providers eager to start billing for these new workloads can add to the pressure to get applications up and running quickly. But a lift-and-shift approach has significant disadvantages compared to application refactoring, also known as rearchitecting.
It may be cheaper upfront to simply rehost an application and its data as is on the public cloud, but this approach can ultimately cost more than running a cloud-native app instead. There may also be performance issues caused by an architecture that wasn't designed for the cloud, missed software bugs and an inability to properly use cloud vendors' native services for monitoring, security and governance.
It's often best to refactor an application as part of a migration, but sometimes, organizations do so retroactively. This can happen when performance fails to meet expectations after a lift and shift, and tuning doesn't solve the problem. A migrated application may also benefit from refactoring when bills are unexpectedly high due to application or database inefficiencies or when security vulnerabilities arise because the application can't integrate with native security systems, such as identity and access management tools.
When and how do you retro-refactor?
Organizations must consider several factors before they decide to refactor their apps -- the most important of which is cost. If the refactoring effort won't pay for itself, don't attempt it. Weigh the previously mentioned signs around cost, performance and security when you analyze the return on investment.
Retroactive refactoring also differs from refactoring in flight, during the migration itself. The application is already in production in the cloud, and refactoring it disrupts that production environment. As a result, users are disrupted twice: once by the migration and again by the refactor.
There's also a great variety of refactoring tools to choose from, and an application's needs will vary depending on the programming languages and databases it relies on. Generally speaking, however, these tools include anything that assists in designing and developing microservices that use cloud-native APIs. Container development, along with Kubernetes deployment and operations, is another common path to refactoring.
Organizations also have several ways to refactor their applications for the cloud.
A complete refactor changes more than 50% of the code and updates the database to use as many cloud-native features as the application requires. This strategy can improve performance, lower operating costs and help IT teams meet the needs of the business. However, the process can be too costly or complex for some organizations, and it can introduce bugs.
Minimum viable refactoring prioritizes speed and efficiency, as it requires only slight changes to the application. Users who take this approach often incorporate cloud-native security, management and perhaps a public cloud database into their migrated workload.
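One common minimum viable change is externalizing configuration, so the migrated app can point at a managed public cloud database without code changes elsewhere. The sketch below, in Python, shows the idea; all variable names and defaults are illustrative, not taken from any particular application.

```python
import os

def database_url() -> str:
    """Build the database connection URL from environment variables.

    A minimal refactor: instead of a hard-coded on-premises connection
    string, the app reads its settings from the environment. The same
    code can then point at a managed cloud database simply by changing
    the deployment's environment, with no further code changes.
    """
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_NAME", "appdb")
    user = os.environ.get("DB_USER", "app")
    # No default for the password: fail fast if it isn't configured.
    password = os.environ["DB_PASSWORD"]
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"
```

Small as it is, this change is often the prerequisite for adopting cloud-native secret management later, since the credential already lives outside the code.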
This approach doesn't offer as many cloud-native benefits as a complete refactor, but it may have some legs. For example, Philip Potloff, head of enterprise strategy at AWS, has a two-week rule for refactoring applications. This technique, he suggested, gives cloud migration teams enough time to refactor what is determined to be the "most problematic compatibility issues with the cloud."
Containerization refactoring is done when applications are moved into containers with minimal modifications. The applications exist within the containers, which enables users to incorporate cloud-native features and improve portability. Enterprises have found this option is more work than expected because of the learning curve that comes with adapting to these new tools. With that said, costs and refactoring times continue to go down due to the popularity of containers and their growing ecosystems.
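In practice, the "minimal modifications" often amount to little more than writing a Dockerfile around the existing code. A sketch for a hypothetical Python service follows; the base image, file names and entry point are assumptions for illustration.

```dockerfile
# Package the existing application as is; only the runtime environment changes.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the unmodified application code into the image.
COPY . .

# Listening on a known port and logging to stdout are typically the only
# changes the application itself needs to run well in a container.
EXPOSE 8000
CMD ["python", "app.py"]
```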
A serverless application refactor has similar issues as containerization in that it changes the development and operations platform, which requires learning new tools and skills. Some modifications are required to make the application work effectively and take advantage of serverless systems on the public cloud. For example, serverless platforms support most languages, but they don't support everything; the same goes for databases.
Unlike containers, serverless platforms don't provide portability, so lock-in is a major downside to this approach.
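Much of the serverless modification work is conforming to the platform's handler contract: request parsing moves out of a long-running web server and into a per-invocation function. Below is a sketch in the AWS Lambda Python handler style; the event fields follow the API Gateway proxy convention, and the business logic is a placeholder.

```python
import json

def handler(event, context):
    """Entry point in the AWS Lambda handler style.

    Each invocation receives an event dict and a context object. Here the
    request body is parsed from the event, and the response is returned as
    a status code plus JSON body, matching the API Gateway proxy format.
    """
    payload = json.loads(event.get("body") or "{}")
    name = payload.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function, it can be exercised locally with a fabricated event before it is ever deployed, which eases the learning curve the article mentions.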
Where to go from here?
Enterprises should bear in mind that, generally speaking, most applications and data sets that can move to a public cloud require at least some refactoring. However, this should not involve major surgery that lasts for months, because most of those efforts won't be economically viable.
Users should at least do minimum viable refactoring for most cloud-hosted applications. Potloff's two-week target may be too aggressive for many IT teams, but there should at least be some hard-and-fast deadlines in place. A targeted timeline keeps teams out of trouble and ensures that most of the refactoring work sees a quicker return on investment.