Strike a cost-performance balance in microservices deployment
To successfully deploy microservices in the cloud, IT teams should beware of these four common mistakes around resource locations, service selection and more.
Microservices adoption can lead to cost explosions and performance black holes when the deployment stretches across clouds -- but it's not the technology's fault.
It is crucial for application development teams to understand how to balance the cost and benefits of microservices, particularly in multi-cloud. Without proper assessment, developers risk building something their company can't afford to deploy.
Performance and cost problems in a microservices deployment arise from four sources: an excessive number of microservices; failure to optimize instance locations during scaling; improper selection of cloud providers and services; and inefficient placement of resources. These factors -- in particular the last three -- are difficult to control in multi-cloud.
Too many microservices
Microservices benefits include runtime code and component reuse, as well as scalability for applications under load. However, because each microservice is hosted independently, the application's cloud costs are often higher than those of a more aggregated model of the same code. In addition, workflows must thread their way between microservices, which can generate transit delays -- and may increase costs in multi-cloud -- as workflows move across cloud provider boundaries.
To avoid these problems, don't invoke a long series of microservices to process the same transaction. Put as much logic into a given microservice as possible, while still reaping its reuse and scalability benefits. Also, scale multi-cloud microservices without adding border crossings from one cloud host to another.
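To see how chained hops add up, consider the back-of-the-envelope Python sketch below. The hop counts, per-hop latencies, payload size and egress rate are illustrative assumptions, not measurements or any provider's published prices.

```python
# Illustrative only: how per-hop latency and cross-cloud egress fees compound
# when one transaction threads through many small microservices.

def workflow_cost(hops, cross_cloud_hops,
                  per_hop_latency_ms=5.0,       # assumed in-cloud hop latency
                  cross_cloud_penalty_ms=40.0,  # assumed extra delay per border crossing
                  payload_gb=0.0001,            # roughly 100 KB per request
                  egress_per_gb=0.09,           # assumed egress rate, $/GB
                  requests_per_month=10_000_000):
    latency_ms = hops * per_hop_latency_ms + cross_cloud_hops * cross_cloud_penalty_ms
    egress_dollars = cross_cloud_hops * payload_gb * egress_per_gb * requests_per_month
    return latency_ms, egress_dollars

# Fine-grained design: 12 hops, 4 of them crossing a cloud boundary
print(workflow_cost(12, 4))   # -> (220.0, 360.0)
# Consolidated design: 5 hops, all within one cloud
print(workflow_cost(5, 0))    # -> (25.0, 0.0)
```

Even with these modest assumptions, the fine-grained, border-crossing design carries several times the latency and a monthly egress bill the consolidated design avoids entirely.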
Too many clouds
Border-crossing issues in a microservices deployment result from failure to optimize the location of instances when scaling. To solve this issue, scale microservices within the same cloud, not across clouds.
Cloud boundaries aren't absolute. IT teams can use the data center to back up a cloud provider or one cloud to back up another. But, under normal operations, concentrate cloud microservices instances rather than scatter them across many different hosting options.
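A minimal sketch of that "scale in place" rule follows: add capacity in the cloud that already hosts the calling workflow, and fail over to another location only when that cloud has no headroom. The cloud names and capacity figures are hypothetical inputs, not a real scheduler API.

```python
# Scale-out placement sketch: stay in the caller's cloud when capacity allows;
# treat other clouds and the data center as backup locations only.

def choose_scale_location(caller_cloud, spare_capacity):
    if spare_capacity.get(caller_cloud, 0) > 0:
        return caller_cloud                          # no border crossing added
    # Backup only: pick the location with the most spare capacity.
    return max(spare_capacity, key=spare_capacity.get)

print(choose_scale_location("aws", {"aws": 3, "azure": 10, "datacenter": 5}))  # -> aws
print(choose_scale_location("aws", {"aws": 0, "azure": 10, "datacenter": 5}))  # -> azure
```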
Too many options
Cloud providers offer multiple hosting options for a microservices deployment, including serverless and managed services, but not all microservices fit every hosting choice.
Serverless hosting can be expensive if the microservices for an application run almost continually. On the other hand, managed hosting is hard to justify if a business uses internally hosted orchestration and management tools across its multi-cloud deployment.
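A quick break-even calculation makes the serverless cost point concrete. Every rate and traffic figure in the sketch below is an illustrative assumption, not published pricing.

```python
# Rough break-even sketch: serverless per-invocation billing vs. an always-on
# container for a microservice under near-continuous load.

PRICE_PER_GB_SECOND = 0.0000167   # assumed serverless compute rate
REQUESTS_PER_SECOND = 50          # assumed steady load
MEMORY_GB = 0.5                   # assumed memory allocation
AVG_DURATION_S = 0.2              # assumed average execution time
CONTAINER_HOURLY_RATE = 0.04      # assumed small IaaS instance price

seconds_per_month = 30 * 24 * 3600
serverless_monthly = (REQUESTS_PER_SECOND * seconds_per_month * AVG_DURATION_S
                      * MEMORY_GB * PRICE_PER_GB_SECOND)
container_monthly = CONTAINER_HOURLY_RATE * 24 * 30

print(f"Serverless:          ${serverless_monthly:,.2f}/month")  # ~ $216
print(f"Always-on container: ${container_monthly:,.2f}/month")   # ~ $29
```

Reverse the traffic profile -- a few sporadic requests per hour instead of a constant stream -- and the balance tips back toward serverless, which is why the decision depends on how continuously the microservice actually runs.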
Both the serverless and managed hosting options also introduce performance issues. When a cloud provider uses a microservices resource pool or a serverless work distribution layer, the microservice may have to spin up on demand (a cold start), or a new instance may take time to deploy. Either way, this adds latency to a workflow -- sometimes enough to make transaction processing unreliable.
Many multi-cloud users find the cheapest and best-performing microservices hosting choice for multi-cloud is containers on IaaS. Public cloud IaaS offerings are more compatible among providers compared to serverless and managed microservices hosting. DevOps and orchestration tools -- Kubernetes in particular -- can help teams manage multi-cloud container hosting without too much integration hassle. Just be careful not to lose track of where you host microservices instances -- or the relative placement of the resources they use -- as this can create border-crossing issues.
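One way to keep container-hosted microservices from drifting across clouds is to pin deployments to nodes in a given region or provider. The sketch below uses the official Kubernetes Python client and the standard topology.kubernetes.io/region node label; the deployment name, image and region value are assumptions for illustration, and your clusters may use different labels to identify the hosting cloud.

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

# Require scheduling onto nodes labeled with the target region, so new
# replicas stay in one cloud rather than spreading across providers.
affinity = client.V1Affinity(
    node_affinity=client.V1NodeAffinity(
        required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
            node_selector_terms=[client.V1NodeSelectorTerm(
                match_expressions=[client.V1NodeSelectorRequirement(
                    key="topology.kubernetes.io/region",  # well-known node label
                    operator="In",
                    values=["us-east-1"],                 # assumed target region
                )]
            )]
        )
    )
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="pricing-service"),  # hypothetical service
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "pricing-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "pricing-service"}),
            spec=client.V1PodSpec(
                affinity=affinity,
                containers=[client.V1Container(
                    name="pricing-service",
                    image="registry.example.com/pricing-service:1.0",  # placeholder image
                )],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```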
Too many resource locations
Every application has a set of activities that it handles. Developers must map the workflows associated with these activities across the components of the application. Workflow mapping shows data paths, which reveal the cloud microservices that are invoked in key activities.
To determine the microservices that impact cost and performance the most, associate mapped workflows with usage volumes and business importance. Those high-impact microservices may have hidden resources they depend on, such as databases. When a microservice redeploys after a failure or scales under load, it must re-establish connections to any resources it uses from its new location.
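A minimal sketch of that scoring step, assuming made-up workflow names, daily volumes and business weights, might look like this:

```python
# Score each microservice by the usage volume and business weight of the
# workflows that touch it, so high-impact services (and the hidden resources
# behind them, such as databases) stand out.

workflows = {
    "checkout":  {"volume_per_day": 500_000,   "importance": 1.0,
                  "services": ["cart", "pricing", "payments", "orders-db"]},
    "view-item": {"volume_per_day": 4_000_000, "importance": 0.4,
                  "services": ["catalog", "pricing", "catalog-db"]},
}

impact = {}
for wf in workflows.values():
    for svc in wf["services"]:
        impact[svc] = impact.get(svc, 0.0) + wf["volume_per_day"] * wf["importance"]

# Print microservices from highest to lowest impact.
for svc, score in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{svc:12s}{score:>14,.0f}")
```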
These hidden resources can create major problems during scaling and redeployment if users aren't mindful of their hosting location and how they'll connect to new microservices instances. Some designs should duplicate connected databases or other resources to ensure there's one in each of the clouds. However, with duplicated databases, one cloud cannot back up another, causing problems with back-end state control.
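The sketch below shows the idea in miniature: when an instance redeploys or scales into a new cloud, it looks up a co-located database replica first and falls back to a cross-cloud connection only when no local copy exists. The replica map, cloud names and endpoints are hypothetical stand-ins for a real data layer.

```python
# Re-establish a resource connection after a microservice scales or redeploys:
# prefer a replica in the instance's current cloud over a border-crossing link.

REPLICAS = {
    "aws":   "orders-db.aws.internal:5432",
    "azure": "orders-db.azure.internal:5432",
}
HOME_CLOUD = "aws"

def pick_database(current_cloud):
    # Use the co-located replica if one exists; otherwise connect back home.
    return REPLICAS.get(current_cloud, REPLICAS[HOME_CLOUD])

print(pick_database("azure"))  # instance lands in Azure -> local replica
print(pick_database("gcp"))    # no local replica -> cross-cloud connection home
```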
One of the big benefits of microservices as a mechanism for code reuse, compared with simple modular programming, is that reuse happens at runtime: a single instance, or a set of scalable instances, can serve many applications. The catch is that organizations must consider every application that shares those instances, across every cloud in the microservices deployment. In some cases, the requirements are difficult to harmonize, but it's a balancing act development teams must master -- before the first bills arrive.