
How to craft a future-proof enterprise container strategy

There are many ways to plan an IT container management strategy. Follow these steps to map out an enterprise's needs for container software and tools.

There are probably a hundred or more combinations of software that organizations can use to deploy and manage application containers. The bad news is that this can make for a long and complex decision-making process. The good news is that the core of this growing toolkit is settling on a single technical choice, with multiple packaging options from multiple sources.

Keep in mind the formula for a container management strategy -- it's the combination of container software plus container orchestration. There's a market baseline that represents what most organizations choose for an enterprise container strategy, both for basic software and orchestration. Beyond that baseline are special container requirements and user constraints that could drive businesses to different purchasing decisions.

The graphic below introduces important requirements and points to the container management approach that's likely best suited for those needs.

Simple container hosting

The container deployment baseline is the simple hosting capability of containerized software itself, and that basic capability might be all an organization needs. If the enterprise's container strategy focuses on private deployment in a single data center, with no more than a dozen off-the-shelf applications to deploy, then a basic container hosting software package, such as Docker, will suffice. But that's becoming increasingly unlikely as containers become more mainstream.
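At this level, container operations amount to little more than pulling an image and running it on one or two hosts. As a rough illustration -- a minimal sketch using the Docker SDK for Python, with a placeholder image and container name -- the entire "deployment" can be a few lines:

```python
import docker

# Connect to the local Docker engine through its default socket.
engine = docker.from_env()

# Run one off-the-shelf application image; image and name are placeholders.
container = engine.containers.run(
    "nginx:stable",                     # hypothetical packaged application
    name="storefront-web",              # hypothetical container name
    detach=True,                        # run in the background and return immediately
    ports={"80/tcp": 8080},             # map container port 80 to host port 8080
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
)

print(container.name, container.short_id)
```

Anything beyond this -- multiple hosts, scaling, failover, rolling upgrades -- is where orchestration takes over.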

Simple container hosting has already hit a wall in adoption. According to CIMI Corp. business surveys, over two-thirds of all container users fell into this "simple tool" category in 2017, but by early 2019, less than one fifth considered themselves candidates for simple hosting even as an on-ramp to container use. Of these, more than 80% were SMBs. Nearly every container user or prospect should now consider a true orchestration strategy.

Container decision flowchart: Determine the container management system that fits your organization's needs.

For that strategy, Kubernetes is the clear winner. Kubernetes is to containers today what Docker was only two years ago -- the essential foundation. The question for users is less about whether they should consider Kubernetes container orchestration as their model for container deployment, and more about which specific package of capabilities they should add to it. Kubernetes has many hooks to add features and capabilities, and every Kubernetes distribution has a different perspective on what tools and capabilities are best.
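Whatever ecosystem a company ultimately picks, the underlying interface is the same Kubernetes API, and that is the level at which each distribution's added tools operate. As a minimal, read-only sketch -- using the official Kubernetes Python client against whatever cluster the local kubeconfig points to -- the baseline looks like this:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes kubectl access is already set up).
config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

# Inventory the cluster: the nodes it manages and the deployments running on them.
for node in core.list_node().items:
    print("node:", node.metadata.name)

for dep in apps.list_deployment_for_all_namespaces().items:
    ready = dep.status.ready_replicas or 0
    print(f"deployment: {dep.metadata.namespace}/{dep.metadata.name} "
          f"{ready}/{dep.spec.replicas} replicas ready")
```

Every Kubernetes distribution builds on this same API; what differs is the tooling layered on top of it.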

Add more tools to a container management strategy

Most organizations today can't meet their needs with basic container software alone, and in the near future, few outside of SMBs will even have that option. Even a basic Kubernetes package is unlikely to meet the requirements of most users. The container tools an organization needs depend on three factors:

  • The nature of the company's hosting commitments and plans. Some companies may plan to host everything on public cloud. Most companies will deploy their key container applications in their data center, but will also use some public cloud resources.
  • The probability that the company will use multiple cloud providers, or at least keep cloud provider options open. The larger the business, the more likely it is a multi-cloud prospect.
  • The company's plans for a truly elastic microservices deployment of, or based on, containers. This use of containers must be inherently hosting-agnostic, ready for the data center and any number of cloud partners, and ready to scale and fail over among all the hosting options. Again, larger businesses are likely to evolve to this position over time, but some organizations will make the change sooner than others.

The first path on the flowchart above represents businesses that depend on the public cloud for all their hosting, and have PCs and specialized small servers, but no data center in house. These companies should plan to consume managed Kubernetes services from their cloud provider.

Most companies will likely follow the second path in the flowchart, where the data center remains the key application hosting point. This approach is consistent with current practices and addresses senior management's governance and security concerns. For simple data center deployments with no plans to incorporate public cloud features or hosting, basic container software alone is adequate. However, for most companies, some cloud-hosted applications or application elements are a given, based on current industry trends. Orchestration is essential for larger data centers and for hybrid or multi-cloud deployments. That's why Kubernetes has become the center of container planning.


The extent and nature of a company's hybrid and multi-cloud deployments drive the next decision: whether to accept a basic Kubernetes offering or one with significant capabilities to manage cluster hosting across cloud and data center boundaries. Almost any Kubernetes package can serve a simple hybrid cloud, but multi-cloud deployments -- particularly those in which an application's components move among cloud providers for scaling or resilience -- require additional federation features to manage clusters in multiple clouds under a common policy set.
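Even before a federation layer is in place, the shape of the problem is visible from the client side: every cluster speaks the same Kubernetes API, so a common policy check can simply iterate over clusters. The sketch below uses the official Kubernetes Python client and assumes a kubeconfig with three hypothetical contexts (one data center, two clouds) and a hypothetical app=checkout label; it is a read-only illustration, not a federation tool:

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts: one data center cluster and two cloud clusters.
CONTEXTS = ["onprem-dc1", "aws-east", "gcp-europe"]

for ctx in CONTEXTS:
    # Build an API client bound to one cluster context.
    api_client = config.new_client_from_config(context=ctx)
    apps = client.AppsV1Api(api_client=api_client)

    # Apply one common view across clusters: count desired replicas of a shared app label.
    deployments = apps.list_deployment_for_all_namespaces(
        label_selector="app=checkout"   # hypothetical application label
    )
    total = sum(d.spec.replicas or 0 for d in deployments.items)
    print(f"{ctx}: {total} desired replicas of 'checkout'")
```

A federation-capable Kubernetes package automates this kind of cross-cluster view -- and, more importantly, cross-cluster placement -- rather than leaving it to scripts.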

The most complex of all container deployments involve applications made up of shared, scalable, resilient microservices that organizations deploy, as needed, across a range of data centers and public clouds. These types of applications blur the boundaries between container hosting and serverless computing, and many users already believe that container-based applications are a stepping stone to serverless deployments. It's complex to move work among a totally dynamic set of components, so these applications typically rely on additional software -- service mesh technology. Service meshes aren't a part of container management, but they've become more important in the Kubernetes ecosystem.

An alternative to service mesh for very large and dynamic applications is resource abstraction. This simplifies application deployment and redeployment by making many different resource types look the same. This process used to involve a separate tool layer, such as Apache Mesos, but now it's a property of the chosen orchestration tool, Kubernetes, and its federation and ecosystem elements.
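In practice, that abstraction shows up as a uniform node inventory: whether a worker is a bare-metal server in the data center or a cloud VM, Kubernetes describes it through the same well-known labels and allocatable-resource fields. A brief, read-only sketch with the Kubernetes Python client, against whatever cluster the local kubeconfig selects:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Every node, regardless of where it physically runs, is presented the same way:
# standard labels plus a normalized view of allocatable CPU and memory.
for node in core.list_node().items:
    labels = node.metadata.labels or {}
    alloc = node.status.allocatable or {}
    print(node.metadata.name,
          "arch:", labels.get("kubernetes.io/arch", "unknown"),
          "zone:", labels.get("topology.kubernetes.io/zone", "n/a"),
          "cpu:", alloc.get("cpu"),
          "memory:", alloc.get("memory"))
```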

Containers in public cloud

Other options outside the main Docker/Kubernetes container management strategy might be best suited for organizations that run containerized applications in the public cloud. The four primary IaaS public cloud providers -- Amazon, Microsoft, Google and IBM -- all offer container tools, and it's smart to consider these first.

Organizations that have an application in containers on public cloud should also consider whether to deploy their own container software on VMs hosted by the public IaaS provider. The advantage of a container-on-IaaS approach is that it's easier to run and manage these applications alongside a company's own data center applications as part of a hybrid cloud. In fact, IT and development teams can use the same container tools in both places.
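The "same tools in both places" point is concrete: the engine API doesn't change just because the host is a cloud VM. As a hedged sketch with the Docker SDK for Python -- the remote endpoint and TLS setup are placeholders for however the IaaS VM actually exposes its Docker API -- the same client code can target either environment:

```python
import docker

# Local engine in the data center, reached through the default socket.
datacenter = docker.from_env()

# Remote engine on a cloud-hosted VM; the hostname and TLS settings are hypothetical
# and assume the Docker API has been exposed securely on that host.
cloud_vm = docker.DockerClient(base_url="tcp://cloud-vm-1.example.com:2376", tls=True)

for location, engine in [("data center", datacenter), ("cloud VM", cloud_vm)]:
    running = engine.containers.list()
    print(f"{location}: {len(running)} containers running")
```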

The disadvantage of using a private container toolkit in the public cloud is that it may be more difficult to integrate applications with the cloud provider's web services. If a company anticipates that its application will rely on cloud services, it must coordinate access to those features when it uses separate container tools to deploy and redeploy in the cloud. At a minimum, always evaluate what a current or prospective public cloud provider offers in the way of container tools and, if necessary, how easily those tools fit into the company's data center container plans.

Generally, public cloud providers' container support focuses on orchestration more than basic software for container hosting. All of the public cloud providers support the Kubernetes orchestration tool, and all host Docker containers. So, a company that has a strong commitment to public cloud may be wise to select compatible tools for the data center. Organizations must anticipate how their use of the public cloud will evolve. It's possible to host and orchestrate container deployments differently in and out of the public cloud, but this requires a higher level of technical skill.

Container orchestration can be complex and requires a strong technical staff to manage operations. Businesses that lack considerable technical expertise in their IT operations group should adopt either a simple container strategy or a cloud-provider-managed Kubernetes strategy, focusing on a product that's easy to use and sustain in production. If the simple approach doesn't fit application requirements, augment in-house technical skills to ensure applications run properly and provide the desired user experience. Many third-party software providers also offer custom integration services, which can help if a company needs them or uses container tools from multiple providers.

The most popular container and orchestration software is available from third parties, bundled and customized, with support included. These bundles range from a repackaged single tool to customized features and complete container toolkits that have everything a business may need. Even if these packages have a few extra items that the organization doesn't need today, they might be the best option to prepare for expanded container use in the future. For any company without significant internal development and open source software experience, think of containers in terms of a Kubernetes ecosystem -- a collection of tools supported by a single provider.
