5 steps to evaluate and perform a service mesh implementation
Service mesh adoption is a long process that starts with careful evaluation of the technology and its deployment models. And that requires the involvement of both IT ops and dev teams.
A service mesh simplifies and automates component communication within microservice applications, but given its novelty -- particularly outside the world of cloud-native software development -- many organizations wonder when and how to start with the technology. With the availability of packaged open source service mesh software and managed cloud services, the deployment task isn't especially difficult, but requires research and planning.
Follow these five steps to begin service mesh implementation.
1. Evaluate the need for a service mesh architecture
Most IT architects and developers are familiar with service mesh basics, but they must also consider typical usage scenarios and implementation options. Carefully weigh service mesh pros and cons and understand key elements of the technology, such as how it enables secure communication between microservices, as well as the sidecar proxy model.
Understand the underlying IT needs that drive service mesh implementation. Enterprises use a service mesh architecture with containers, particularly those that run at production scale. To improve the reliability, security and manageability of containerized apps and microservices, IT teams strive to:
- enforce the use of transport layer security (TLS) connections between containers in a cluster;
- deterministically route traffic via load balancing and traffic management policies; and
- measure performance and identify bottlenecks and problem areas via application telemetry and tracing.
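As one concrete illustration of the first item, a mesh can enforce encrypted service-to-service traffic declaratively. The following is a minimal sketch assuming Istio is installed; the "prod" namespace name is hypothetical.

```shell
# Sketch, assuming an Istio-enabled cluster: require mutual TLS for all
# workload-to-workload traffic in a hypothetical "prod" namespace.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod
spec:
  mtls:
    mode: STRICT
EOF
```

With STRICT mode, sidecar proxies reject any plaintext connection to workloads in that namespace; no application code changes are required.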
Developers and architects can complete each of these tasks in a Kubernetes cluster with add-on services. For example, they can use an application-layer load balancer such as NGINX, or monitoring software such as Prometheus and Grafana, but these tools add management overhead. A service mesh, by contrast, handles these tasks automatically.
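To make the load-balancing comparison concrete, the routing rules a mesh absorbs look like the following sketch, assuming Istio; the "reviews" service and its v1/v2 subsets are hypothetical names.

```shell
# Sketch, assuming Istio: send 90% of traffic to v1 and 10% to v2 of a
# hypothetical "reviews" service -- the kind of deterministic routing
# rule a mesh handles without a separate load balancer deployment.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
EOF
```

The DestinationRule maps pod labels to named subsets, and the VirtualService splits traffic between them by weight, which is also the basic mechanism for canary releases.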
2. Choose a deployment model
Evaluate the range of service mesh offerings, including both open source projects, such as Istio and Linkerd, and cloud-hosted services from AWS, Azure and Google.
Advancement to this step implies an organization is serious about running containerized services, and probably already has a sizeable deployment. In this case, IT developers and architects should have already decided whether to host their environment on premises or in the cloud. If not, decide where to run microservices and whether your organization has the necessary skill sets and infrastructure to do so in-house.
The cloud offers the benefits of managed Kubernetes services, which reduce the learning curve and management overhead associated with running the container orchestration platform. Furthermore, in response to increasing interest in service mesh architectures, major cloud vendors also offer a mesh service. Examples include AWS App Mesh, which is based on the open source Envoy proxy; Azure Service Fabric, which was originally proprietary but is now open source; and Google Cloud Istio on Google Kubernetes Engine (GKE), which is a packaged version of the open source Istio platform.
The universal support for Kubernetes and the portability of application configurations across environments push more organizations to choose a multi-cloud container architecture. In this case, how to manage multiple environments, including multiple service meshes, factors into an organization's decision.
3. Select a service mesh platform
There are several open source service mesh software platforms, some of which underpin the cloud services mentioned above. The most popular are:
- Istio
- Envoy
- Kong
- Conduit (since merged into Linkerd 2.x)
Commercial software providers have also developed packaged service mesh products, such as VMware NSX Service Mesh, Red Hat OpenShift Service Mesh and Nginx Controller. Products like Rancher also package the installation and management of Kubernetes and Istio.
Often, an enterprise's decision between on-premises and cloud deployment -- as outlined in step two above -- will determine which platform it uses. For example, if the primary container environment runs in a hybrid configuration across both vSphere and GKE, the service mesh will be Istio. Similarly, if the primary container platform is AWS, then App Mesh, with Envoy, is the choice. The free choice of service mesh software is only a reality for organizations that run DIY Linux container environments and are prepared to configure and support open source software.
4. Install and manage the service mesh
In general, it is simple to install and set up service mesh software once you have an operational container cluster -- the packages typically have installation scripts that handle popular container environments. For example, Istio provides setup instructions for 10 cloud and on-premises platforms.
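As a rough sketch of what such an installation script boils down to, the commands below assume Istio's istioctl tool and an existing Kubernetes cluster; the "demo" profile and "default" namespace are illustrative choices, not recommendations.

```shell
# Sketch, assuming istioctl is installed and kubectl points at a
# working cluster: install Istio with its demonstration profile.
istioctl install --set profile=demo -y

# Enable automatic sidecar injection for a namespace, so every new pod
# scheduled there gets an Envoy proxy container alongside it.
kubectl label namespace default istio-injection=enabled --overwrite

# Verify the mesh control-plane pods are up.
kubectl get pods -n istio-system
```

Production installs typically use a more conservative profile and pin a specific Istio version, but the overall flow is the same.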
Still, IT admins need to set up traffic management, security, telemetry and application policies. Each of these tasks must follow a detailed set of instructions, using Kubernetes' -- or an equivalent cluster manager's -- configuration tools, such as kubectl, and might require additions to underlying applications or web code. Cloud platforms don't necessarily reduce this work, as the setup, configuration and testing instructions for AWS App Mesh, as one example, are lengthy.
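After applying those policies, it helps to validate the configuration rather than debug by trial and error. Assuming Istio, two built-in istioctl commands cover the basics:

```shell
# Sketch, assuming an Istio mesh: lint the mesh configuration in a
# namespace for common mistakes such as missing sidecar injection.
istioctl analyze -n default

# Confirm each sidecar proxy is in sync with the control plane.
istioctl proxy-status
```

Other meshes ship comparable diagnostics; the point is to verify policy changes immediately after each configuration step.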
5. Test, evaluate and move to production
Service mesh testing is equally complicated and involved, given the many feature modules and parameters a platform such as Istio or Envoy includes. While product documentation helps, prepare for a long evaluation period before a service mesh is production ready.
Efforts to streamline test setup and execution are still nascent. Examples include the open source Istio Test Framework, which simplifies writing and running tests through a set of code modules, and an Istio Lab GitHub project, which provides sample tests for various Istio features.
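One mesh capability worth exercising during this evaluation period is fault injection, which tests how services cope with a degraded dependency without touching application code. The sketch below assumes Istio; the "ratings" service name is hypothetical.

```shell
# Sketch, assuming Istio: add a 5-second delay to half of all requests
# to a hypothetical "ratings" service, to observe how its consumers
# handle slow responses -- timeouts, retries and fallback behavior.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 50.0
        fixedDelay: 5s
    route:
    - destination:
        host: ratings
EOF
```

Because the fault lives in mesh configuration, it can be applied and removed per namespace or per route, which makes this kind of resilience testing repeatable.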
Service mesh technology spans both application development and IT operations teams. As a result, it greatly helps the evaluation and deployment process -- especially steps four and five above -- to have a DevOps practice. The factors and options that enterprises must consider to thoroughly evaluate production-scale container infrastructure, in general, and service meshes, specifically, require expertise in various disciplines, which means project and implementation teams must be similarly diverse. Likely, organizations that need service mesh capabilities already practice DevOps and associated processes, such as CI/CD and infrastructure-as-code. If not, service mesh implementation presents an ideal opportunity to embrace multi-disciplinary application development and IT teams.