SDDC architecture basics and deployment considerations

Software-defined data centers accelerate hardware provisioning and maintenance. These benefits are reason enough for admins to familiarize themselves with SDDC components.

A software-defined data center provides an IT experience similar to a public cloud. Admins select services and assemble them into applications using automation software and a configuration stored in a version control repository. The software-defined part frees users from thinking about the underlying hardware.

The software-defined data center (SDDC) platform abstracts physical resources the same way that a public cloud platform abstracts the physical data center. At the lowest level, an SDDC architecture needs an automated way to provision networks, store data and compute. A more mature SDDC includes the ability to provision application services, such as load balancers, databases and web servers, from the same automation.
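As a minimal sketch of that lowest layer, the following toy control plane provisions network, storage and compute through one programmatic interface. Every class and method name here is invented for illustration; no real SDDC product's API is implied.

```python
# Hypothetical SDDC control plane: one API provisions all three
# base resource types and records them in a shared inventory.
class ControlPlane:
    def __init__(self):
        self.inventory = {"compute": [], "storage": [], "network": []}

    def provision(self, kind, name, **spec):
        """Create a resource of the given kind and record it."""
        resource = {"name": name, **spec}
        self.inventory[kind].append(resource)
        return resource

sddc = ControlPlane()
# The same call shape covers network, storage and compute.
sddc.provision("network", "app-net", vlan=120, cidr="10.1.2.0/24")
sddc.provision("storage", "app-vol", size_gb=200, tier="ssd")
sddc.provision("compute", "app-vm", cpus=4, ram_gb=16, network="app-net")
```

The point of the sketch is the uniformity: users request any resource type through one entry point instead of three separate tool chains.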

Many organizations are moving to add containers and function as a service (FaaS) to their IT offerings. Most SDDC deployments are a layered set of technologies that share a common software-defined control plane.

User-focused services

A crucial part of the SDDC is that it delivers services designed around the needs of its users rather than around the data center's own technology. For example, a software developer who wants a database for an application does not want to manage, or even know about, the Linux server that runs the database engine. The SDDC's services should align with the usage needs of developers and application teams.

The IT operations team, not the SDDC's users, should abstract and manage all infrastructure layers. Users rely on automation and application programming interfaces (APIs) to provision the services they require from the SDDC's pool of resources.

SDDCs aim for the IT agility of public cloud. The various layers of the SDDC architecture must provide resources and APIs for provisioning and management, all of which are streamlined into a single API for SDDC users.

Configuration components for SDDCs

The first layer of SDDC architecture is server virtualization; a hypervisor with some automation for deploying VMs and applications inside those VMs supports software-defined deployments. But for a complete SDDC, admins need more than VM templates that can be deployed in minutes. Whole applications that might span multiple VMs -- and maybe multiple data centers -- must be deployable as a unit.
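Deploying a whole application as a unit is usually expressed as a blueprint. The sketch below, with invented names and structure, expands one hypothetical blueprint spanning two tiers and two data centers into the individual VMs to create:

```python
# Hypothetical application blueprint: the whole app deploys as one
# unit, even though it spans multiple VMs and two data centers.
BLUEPRINT = {
    "name": "web-shop",
    "vms": [
        {"role": "web", "count": 2, "site": "dc-east"},
        {"role": "db",  "count": 1, "site": "dc-west"},
    ],
}

def deploy(blueprint):
    """Expand a blueprint into the individual VMs to create."""
    vms = []
    for tier in blueprint["vms"]:
        for i in range(tier["count"]):
            vms.append(f'{blueprint["name"]}-{tier["role"]}-{i}@{tier["site"]}')
    return vms
```

A single `deploy(BLUEPRINT)` call stands in for what would otherwise be several separate, manually coordinated VM deployments.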

Server virtualization sparked the emergence of shared storage, which must be software-defined to facilitate the SDDC. The storage for an SDDC must be provisioned and managed using APIs. The software-defining part is the ability to pool physical storage resources and provision them out through an API, not the internal architecture of the storage system. A monolithic Fibre Channel array is just as capable of being software-defined as a scale-out cluster of commodity servers.
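That backend-agnostic pooling can be sketched as follows. The class is a hypothetical illustration: the same interface could front a monolithic Fibre Channel array or a scale-out cluster, because only the API matters to the SDDC.

```python
# Hypothetical storage pool: what makes it "software defined" is
# API-driven provisioning from a pool, not the backend hardware.
class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}

    def free_gb(self):
        """Capacity not yet carved into volumes."""
        return self.capacity_gb - sum(self.volumes.values())

    def create_volume(self, name, size_gb):
        """Carve a volume out of the pool via the API."""
        if size_gb > self.free_gb():
            raise ValueError("pool exhausted")
        self.volumes[name] = size_gb
        return name

pool = StoragePool(capacity_gb=500)
pool.create_volume("db-vol", 200)
```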

The waves of server and storage virtualization have prompted the rollout of software-defined networking (SDN) products. SDN products use an API to deploy a network configuration across an entire data center, including its wide area network. A command line on each switch or a configuration management tool that pushes consistent configuration files to multiple switches is not enough; there must be an API that controls the entire network.
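The difference between per-switch configuration and an SDN API can be shown with a small sketch. The controller below is purely illustrative: one `apply` call pushes the desired state to every switch in the fabric, instead of an admin logging into each switch.

```python
# Hypothetical SDN controller: one API call applies the desired
# network state across the entire fabric, not switch by switch.
class SdnController:
    def __init__(self, switches):
        # Each switch starts with an empty VLAN table.
        self.switches = {s: {} for s in switches}

    def apply(self, desired_vlans):
        """Push one desired VLAN table to every switch; return count."""
        for switch in self.switches:
            self.switches[switch] = dict(desired_vlans)
        return len(self.switches)

fabric = SdnController(["sw1", "sw2", "sw3"])
fabric.apply({"app-net": 120})
```

Every switch ends up with an identical, centrally defined configuration, which is the property a per-switch command line cannot guarantee.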

Software-defined services

SDDCs originally became accessible through infrastructure as a service (IaaS) platforms, but that isn't always beneficial for all users. For instance, developers do not always want a VM; a VM is usually the last resort for their software. Application-level services such as databases, message queues and object storage provide a more productive developer environment. This means admins need to integrate SDDC services on top of IaaS, which can be done with some extra planning.

Often, SDDCs have an app store functionality, where application blueprints deliver these services. The SDDC should include a way to deploy container-based applications, such as a Kubernetes blueprint.

Another popular service is a FaaS component that is similar to AWS Lambda. These value-added services make an SDDC useful to developers and competitive with the public cloud.
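The FaaS idea can be reduced to a toy registry: small functions are registered by name and invoked on demand with an event, Lambda-style. This in-memory sketch is an assumption for illustration, not any platform's real API.

```python
# Toy FaaS registry: register named functions, invoke them on demand.
class Faas:
    def __init__(self):
        self._functions = {}

    def register(self, name, fn):
        """Publish a function under a name."""
        self._functions[name] = fn

    def invoke(self, name, event):
        """Run a registered function against an event payload."""
        return self._functions[name](event)

faas = Faas()
faas.register("greet", lambda event: f"hello {event['user']}")
```

Developers care only about the function and the event; the SDDC owns everything underneath.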

APIs and the public cloud

Admins want a single API focused on the services they use. This API must drive the SDDC and then, in turn, drive all of the internal APIs for compute, storage, network and application services.
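That single-API idea is essentially a facade: one user-facing entry point dispatches to the internal layer APIs. The sketch below uses invented layer classes to keep it self-contained; a real SDDC would route to its actual compute, storage, network and application services.

```python
# Hypothetical internal layer APIs the SDDC facade fans out to.
class Compute:
    def create_vm(self, name, cpus):
        return f"vm:{name}:{cpus}cpu"

class Storage:
    def create_volume(self, name, size_gb):
        return f"vol:{name}:{size_gb}gb"

class SddcApi:
    """Single user-facing API; users never call the layers directly."""
    def __init__(self):
        self._layers = {"compute": Compute(), "storage": Storage()}

    def request(self, layer, action, **kwargs):
        """Dispatch one uniform request to the right internal layer."""
        return getattr(self._layers[layer], action)(**kwargs)

api = SddcApi()
```

The facade is what lets the SDDC present one streamlined surface while the internal APIs evolve independently.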

For ease of use, the SDDC must also offer a web portal that deploys services through wizards on top of the external-facing API, until admins are ready for full automation. Just as beginners in public cloud start by clicking through the web portal and move on to API-based tools, so should SDDC users.

It is essential to understand the limitations of an SDDC and its differences from a public cloud configuration. One of the first elements is initial scale. Big public clouds have multiple data centers in multiple regions and hundreds of thousands of servers. Most SDDC deployments have less than a thousand servers across all their locations.

The second element is long-term scaling; the big public cloud providers add capacity thousands of servers at a time and often add tens of thousands of servers every month. An SDDC architecture doesn't add servers every month; admins usually add servers during an annual hardware purchase cycle. The result is that while public cloud can accommodate organizations tripling their workload overnight, an SDDC is unlikely to support that growth.

The future of SDDC architecture

One challenging trend for an SDDC is the rise of graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) as part of the application mix. Traditionally, an SDDC delivers everything from a VM, and the physical servers are an abstracted commodity.

GPU and FPGA hardware are still not commodities, nor are they fully abstracted, so the fit into SDDC architecture is more difficult. Often physical servers with GPUs or FPGAs are on islands of uniqueness, surrounded by the sea of commodity virtualization hosts. Peripheral Component Interconnect Express fabrics are one way that GPUs and FPGAs can be separated from the physical servers; the fabric allows software to define which servers have which physical devices.

Hybrid cloud is another current reality; the SDDC is usually only part of the enterprise IT estate. Having the SDDC APIs available for workload deployment on the public cloud would be a huge enabler for hybrid deployment. Another trend is the "intelligent edge," where applications are distributed beyond data centers, which raises the question of whether the SDDC can control infrastructure outside the data center.
