Tip

Container orchestration tools ease distributed system complexity

Containers simplify some aspects of enterprise apps, but the deployment process is no walk in the park. Orchestration tools bring standardization and automation to complex hosting scenarios.

Far more than half the businesses that use containers do so with Docker technology. But that's only half the story.

The majority of containers actually deployed for production workloads rely on more than just Docker. Container deployment isn't simple, no matter what tools you use. There is inherent complexity in the process of creating containers and using them to deploy applications. That difficulty is compounded when many applications are involved, or when containers are hosted on both cloud and on-premises infrastructure in a cloud-bursting or failover scenario, and it only grows with the size of the data center.

IT teams that use basic Docker containerization must resolve complicated problems manually, which eats up time and can introduce errors.

Kubernetes-based container orchestration tools

Automated container orchestration tools deploy and redeploy apps and handle failures. They're the next step for organizations that require more than command line-based container management via Docker.
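To make the contrast concrete, here is a minimal sketch of that hands-on style of management using the Docker SDK for Python, assuming a local Docker daemon; the image, port mapping and container name are placeholders. At this level, every placement, restart and scaling decision falls to the operator.

# Minimal sketch of manual container management with the Docker SDK for
# Python (pip install docker). Assumes a local Docker daemon; the image,
# port mapping and container name are placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Start a single container by hand; nothing restarts or reschedules it for you.
web = client.containers.run(
    "nginx:latest",          # placeholder image
    detach=True,             # return immediately rather than stream output
    ports={"80/tcp": 8080},  # map container port 80 to host port 8080
    name="web-1",
)

# "Monitoring" is a manual loop over whatever the daemon reports.
for c in client.containers.list():
    print(c.name, c.status)

Orchestration tools exist to take exactly these chores (placement, restarts, scaling and health checks) off the operator's plate.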

Kubernetes is the most popular container orchestration technology. More containers are deployed with a combination of Docker and Kubernetes than with Docker alone. This is due in part to the fact that Docker-only container users typically run fewer applications than their orchestrating brethren.

Kubernetes organizes the relationship between applications and resources. The user defines clusters of resources available to host applications. Kubernetes simplifies assignment of containers to hosts, as well as how updated components of complex applications get replaced. It also enhances DevOps processes because it replaces common manual tasks in the IT environment with policy-driven automation and standardizes the way that components and applications integrate into complex workflows.
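As a rough illustration of that declarative model, the sketch below uses the official Kubernetes Python client to define a small Deployment; the namespace, labels, replica count and image are placeholders, and credentials are assumed to come from a local kubeconfig. The operator states the desired result, and Kubernetes decides which hosts run the containers and rolls components over when the spec changes.

# Sketch of a declarative Deployment via the official Kubernetes Python
# client (pip install kubernetes). Namespace, labels and image are
# placeholders; cluster credentials come from the local kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
apps = client.AppsV1Api()

labels = {"app": "web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies running across the cluster
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

# The scheduler assigns pods to hosts; changing the image in this spec later
# triggers a rolling replacement of the old containers by default.
apps.create_namespaced_deployment(namespace="default", body=deployment)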

Cloud benefits of container orchestration

Kubernetes is a fixture in large container deployments on private infrastructure, but it shines when combined with the cloud. By properly defining Kubernetes clusters, a user can divide applications between the data center and the public cloud, and between public cloud providers in multi-cloud deployments; a brief sketch of this follows the list below. The major public cloud providers support Kubernetes within their offerings as managed services:

  • Amazon Elastic Container Service for Kubernetes, called EKS;
  • Google Kubernetes Engine, called GKE;
  • Microsoft Azure Container Service, called AKS;
  • IBM Cloud Container Service; and others.
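Here is the sketch promised above: the same Python client pointed at two separately defined clusters through different kubeconfig contexts, one standing in for an on-premises cluster and one for a managed cloud cluster. The context names ("onprem" and "cloud-eks") are hypothetical.

# Sketch of working across two separately defined Kubernetes clusters via
# named kubeconfig contexts. The context names are hypothetical placeholders
# for an on-premises cluster and a managed cloud cluster.
from kubernetes import client, config

def apps_api_for(context_name):
    """Build an AppsV1 API client bound to one named kubeconfig context."""
    api_client = config.new_client_from_config(context=context_name)
    return client.AppsV1Api(api_client)

for ctx in ("onprem", "cloud-eks"):
    apps = apps_api_for(ctx)
    # Survey what each resource pool is running before deciding where a
    # given application component should land.
    for d in apps.list_namespaced_deployment(namespace="default").items:
        print(ctx, d.metadata.name, d.spec.replicas)

The same pattern applies whether the second context points at EKS, GKE, AKS or another managed service; from the tooling's perspective, each is just another cluster.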

The broad support for Kubernetes as a container orchestration tool for Docker creates its own confusion, particularly in hybrid and multi-cloud deployments. Should the IT team in charge of the application also set up and manage Kubernetes, or use the cloud provider's version of the technology? Generally, large enterprises, and teams that switch cloud providers frequently, prefer to manage Kubernetes in-house. These organizations can still use public cloud providers in their container strategy, but they should host Docker and Kubernetes on infrastructure-as-a-service VMs rather than adopt a provider's managed Kubernetes service. The organization can then use Kubernetes' resource cluster management and integration capabilities to tie the environments together.

Kubernetes as a service is a good choice if most container deployment occurs in the public cloud or if there is a clear public/private boundary. For example, if device- and GUI-centric front-end components run in the cloud and back-end applications run in the data center, the chosen cloud provider's managed Kubernetes service will work well, because the organization orchestrates the two pieces of the application independently. These application components should not fail over or burst between the public and private resource pools.

Container orchestration options

Docker container orchestration tool choices don't end with Kubernetes. Some container users create highly complex and dynamic applications with components that float among cloud providers and into and out of the data center. A second level of orchestration creates a universal virtual resource pool that's independent of the hosting provider or server technology. Under this model, the hosting resources all look the same and are therefore easier to manage. These second-level orchestration tools are worth evaluating for cloud bursting, failover and event processing.

The Apache Mesos project and the commercialized Mesosphere DC/OS tool achieve this kind of container orchestration. Mesos and DC/OS create, essentially, a fully distributed OS kernel that spans every cloud and on-premises system on which it's run.

Don't be insecure

The most publicized reason to select containerization tools beyond Docker is security, but Docker security has improved considerably as the platform matured over the course of 2017. Unless you have exceptionally stringent security and compliance requirements for container deployment, Docker should fit the project. Otherwise, evaluate CoreOS Rkt as the fundamental container software for the deployment. Orchestration tools used for Docker will also work with Rkt.

Mesos and DC/OS can be paired with a kind of super-orchestration tool, Marathon, to create sophisticated container deployment and operations. Marathon features target high availability. A user can set deployment policies that limit where containers are hosted in order to meet security and compliance goals. It also includes APIs so orchestration processes can integrate with load balancers, management systems and other tools.
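As a hedged illustration of such a policy, the sketch below posts a hypothetical app definition to Marathon's /v2/apps REST endpoint with a placement constraint; the Marathon URL, the "zone" agent attribute and the image are invented for the example.

# Sketch of a Marathon app definition with a placement constraint, posted to
# the /v2/apps REST endpoint. The endpoint URL, the "zone" agent attribute
# and the image are placeholders for illustration.
import requests

app = {
    "id": "/payments/api",
    "cpus": 0.5,
    "mem": 256,
    "instances": 3,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "registry.example.com/payments-api:1.4"},
    },
    # Constraints are [attribute, operator, value] triples; this one keeps
    # the app on agents tagged zone=pci to honor a compliance boundary.
    "constraints": [["zone", "CLUSTER", "pci"]],
}

resp = requests.post("http://marathon.example.com:8080/v2/apps", json=app)
resp.raise_for_status()
print(resp.json().get("deployments"))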

Despite the various advanced container orchestration tools available, not all deployments need to go beyond the Docker platform. The majority of cases that require orchestration are addressed by Kubernetes. As interest in containers and real production-level deployments grow, the orchestration demands of users will evolve as well.

