Manage containerized microservices with a service registry

It's easier than ever to spin up a containerized app, but that ease creates a problem: how to keep track of all those containerized microservices. Turn to service registries and orchestrators.

It's not easy to build an application that consists of multiple microservices. Different development groups work on different services, and the app might also rely on third-party services to fulfill certain needs. Microservices are flexible, but that flexibility makes it challenging to keep track of where each service actually runs.

Cloud services and containerization have made location independence possible, but they have also made application management more complicated. It's relatively simple to spin up a containerized microservice on a platform -- regardless of its location. The software team can run an instance on a private cloud or on AWS, Microsoft Azure, Google Cloud Platform or elsewhere, because containers are designed to work regardless of the underlying physical infrastructure.

However, what happens when one service must call another service, or depends on a specific storage system to carry out its function? This kind of connection is relatively simple in monolithic, nonvirtualized designs: It relies on physical logical unit numbers for storage and application interfaces bound to fixed, static IP addresses. Hard-coded linkages make sense for immobile elements, and monolithic applications tend to require few connecting links. Now, not only can a containerized app consist of dozens of microservices, but each one can run anywhere and move anywhere else -- even in real time.

Requirements for containerized microservices management

The main things to implement for containerized microservices are virtualized links and calls, and IT must manage how those links are automated.

To run microservices in containers, use a form of lookup table that translates a need from a calling service into a suitable response from another service. For example, Service A must store data; it states the type of storage it needs, and the lookup table identifies what is available. The lookup table then binds the call to that storage resource at a virtual level during provisioning. If the storage is moved, the lookup table maintains the dynamic link -- even though the physical position of the storage resource has changed.
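
As a rough sketch of the idea -- not any particular product's implementation -- the Python below keeps a lookup table that maps an abstract need to whatever endpoint currently fulfills it. The capability name ("object-storage") and the endpoints are hypothetical; the point is that when the resource moves, only the table entry changes, never the caller.

```python
# Minimal sketch of a lookup table that binds an abstract need to a concrete
# endpoint. The capability name and URLs are hypothetical.

class LookupTable:
    def __init__(self):
        self._bindings = {}  # need -> current endpoint

    def bind(self, need, endpoint):
        """Record (or update) where a given need is currently satisfied."""
        self._bindings[need] = endpoint

    def resolve(self, need):
        """Return the endpoint that currently satisfies the need."""
        try:
            return self._bindings[need]
        except KeyError:
            raise RuntimeError(f"no provider registered for {need!r}")


table = LookupTable()
table.bind("object-storage", "https://storage-node-1.internal:9000")

# Service A asks for storage by capability, not by address.
print(table.resolve("object-storage"))

# The storage resource moves; only the binding changes, and callers are unaffected.
table.bind("object-storage", "https://storage-node-7.internal:9000")
print(table.resolve("object-storage"))
```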

Another benefit of virtualized links is that multiple back-end instances can sit behind a single link to provide load balancing -- and intelligence can be built into the system to scale those instances dynamically and minimize resource waste.
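
Extending that sketch, a single need can map to several registered instances, with the resolver spreading calls across them. The round-robin picker below is purely illustrative; the service name and addresses are made up, and real registries and load balancers use a range of strategies.

```python
from itertools import cycle

# Hypothetical: several instances registered under one service name.
instances = {
    "inventory-api": [
        "http://10.0.1.11:8080",
        "http://10.0.2.15:8080",
        "http://10.0.3.19:8080",
    ],
}

# One round-robin iterator per registered service name.
_rotors = {name: cycle(endpoints) for name, endpoints in instances.items()}

def pick(service_name):
    """Return the next back-end instance for a service, round-robin."""
    return next(_rotors[service_name])

for _ in range(4):
    print(pick("inventory-api"))  # cycles through the three instances
```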

Call virtualization should be carried out through a service registrar function, which acts as the gateway into the microservices service registry. Service discovery -- how these independent and movable components connect -- is core to how a microservices architecture works. The service registry is responsible for tracking which services are registered, where they are located and whether they are available -- and for deregistering services when they are no longer valid.

Via a service registry, containerized microservices use a service discovery function for two purposes: to identify where other services are, and to register and make themselves visible to other services. Registration used to be a primarily manual process in which the software team coded it into the container; however, popular container management technologies now automate registration.
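
To give the lookup side a concrete shape, the sketch below queries HashiCorp Consul's agent HTTP API -- one widely used registry, chosen here only as an example -- for healthy instances of another service. The local agent address and the service name "billing-api" are assumptions.

```python
import requests

# Assumed: a Consul agent on its default local address, and a service
# registered under the hypothetical name "billing-api".
CONSUL = "http://127.0.0.1:8500"

def discover(service_name):
    """Ask the registry for healthy instances of a service and return their URLs."""
    resp = requests.get(
        f"{CONSUL}/v1/health/service/{service_name}",
        params={"passing": "true"},  # only instances whose health checks pass
        timeout=5,
    )
    resp.raise_for_status()
    return [
        f"http://{entry['Service']['Address']}:{entry['Service']['Port']}"
        for entry in resp.json()
    ]

print(discover("billing-api"))
```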

Azure has its own registry, Azure Container Registry, as well as the fully managed Azure Kubernetes Service, which automatically registers microservices deployed to a suitably configured cluster. Google Cloud offers Container Registry, and AWS has a range of container registries across its offerings, including Amazon Elastic Container Registry. Many of these managed platforms also handle container orchestration.

All of these platforms typically require developers to include only a couple of lines of code in the container to fully register the service when it spins up in the hosting environment.
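
What those few lines look like varies by platform. As one hedged example, again using Consul's agent API, a container can register itself when it starts and attach a health check so the registry drops it if it disappears; the service name, port and health check path here are assumptions, not the defaults of any product.

```python
import os
import socket
import requests

CONSUL = os.getenv("CONSUL_ADDR", "http://127.0.0.1:8500")
MY_ADDR = socket.gethostbyname(socket.gethostname())

# Register this instance at container startup. The name, port and
# health-check path are illustrative only.
registration = {
    "Name": "billing-api",
    "ID": f"billing-api-{socket.gethostname()}",
    "Address": MY_ADDR,
    "Port": 8080,
    "Check": {
        "HTTP": f"http://{MY_ADDR}:8080/healthz",
        "Interval": "10s",
        # Drop the entry automatically if the instance stays unhealthy.
        "DeregisterCriticalServiceAfter": "1m",
    },
}

resp = requests.put(f"{CONSUL}/v1/agent/service/register", json=registration, timeout=5)
resp.raise_for_status()
```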

Most container orchestration tools are also ready to provide service discovery. IT organizations that want to control how distributed apps are built, provisioned and managed in an operations environment should look for a suitable tool.

Kubernetes, Docker Swarm from Mirantis and Docker Compose are three popular technologies in this space. Red Hat OpenShift is a container platform built around Kubernetes, and there are many other supported, managed Kubernetes distributions on the market. HashiCorp Nomad is another well-featured cluster and application scheduler; it can work alongside other HashiCorp products, such as Consul for service discovery, to provide a fully managed DevOps environment.
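
Kubernetes, for instance, makes registration largely implicit: creating a Service object gives the workload a stable DNS name of the form service.namespace.svc.cluster.local, and cluster DNS keeps that name pointing at healthy pods. The snippet below assumes a Service called "orders" in a "shop" namespace and only works from inside a cluster.

```python
import socket
import requests

# Inside a Kubernetes cluster, a Service is reachable at a stable DNS name.
# "orders" (Service) and "shop" (namespace) are hypothetical.
SERVICE_DNS = "orders.shop.svc.cluster.local"

# Cluster DNS maps the name to the Service's current virtual IP ...
print(socket.gethostbyname(SERVICE_DNS))

# ... so callers just use the name, and Kubernetes routes to a healthy pod.
resp = requests.get(f"http://{SERVICE_DNS}/api/v1/status", timeout=5)
print(resp.status_code)
```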

What to look for in a tool

A proper orchestration tool for containerized microservices provisioning and management must have a service registry to automate discoverability. It must also maintain a real-time register of which services are available and which are not, whether because of planned or unplanned events. Developers can access the service discovery capability when building a composite app, and can check before they build new microservices to ensure they don't replicate previous work.
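
One common way a registry keeps that register current is to expire entries whose heartbeats stop arriving. The sketch below is generic and not tied to any product; the service name and 15-second time-to-live are assumptions.

```python
import time

TTL_SECONDS = 15  # assumed heartbeat window, purely illustrative

# service name -> {instance id: timestamp of last heartbeat}
_heartbeats = {"billing-api": {}}

def heartbeat(service, instance_id):
    """Called periodically by each healthy instance."""
    _heartbeats[service][instance_id] = time.time()

def available(service):
    """Return only the instances that have checked in within the TTL."""
    now = time.time()
    live = {i: t for i, t in _heartbeats[service].items() if now - t <= TTL_SECONDS}
    _heartbeats[service] = live  # stale instances are silently deregistered
    return sorted(live)

heartbeat("billing-api", "instance-1")
print(available("billing-api"))  # ['instance-1'] while its heartbeat is fresh
```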

Along with managing stateless provisioning of microservices across a hybrid mix of infrastructure options, orchestration tools give IT organizations a proven method to deploy microservices updates and patches -- which further avoids error-prone manual scripts.
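
As one hedged illustration of orchestrated rather than scripted patching, the official Kubernetes Python client can patch a Deployment's image and leave the rolling replacement of pods to the orchestrator. The Deployment name, namespace, container name and image tag below are assumptions.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig; inside a cluster,
# use config.load_incluster_config() instead.
config.load_kube_config()

apps = client.AppsV1Api()

# Patch the container image; the Deployment's rolling-update strategy then
# replaces pods gradually, with no hand-written upgrade scripts.
apps.patch_namespaced_deployment(
    name="billing-api",    # hypothetical Deployment name
    namespace="shop",      # hypothetical namespace
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "name": "billing-api",
                            "image": "registry.example.com/billing-api:1.4.2",
                        }
                    ]
                }
            }
        }
    },
)
```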

IT organizations that want to make the most of microservices must adopt cloud and containerization services -- they are the only practical way forward. To optimize this approach, however, they also need service registries, along with automated orchestration tools.
