Cloud computing network primer
Cloud computing network architecture is moving toward a flat, leaf-spine topology that reduces latency, balances bandwidth and provides any-to-any connectivity among servers. Learn more in this primer.
What is a cloud computing network?
The definition of cloud computing changes readily, so the definition of cloud computing networks changes as well. Ultimately, the goal of cloud computing -- regardless of model -- is to create a fluid pool of resources across servers and data centers that enables users to access stored data and applications on an as-needed basis. Cloud computing networks, therefore, have two missions: to enable the movement of that pool as a single virtual resource, and to connect users to these resources regardless of location.
To make that happen, cloud computing networks -- whether they support public, private or hybrid clouds -- must be able to:
- Burst bandwidth up and turn it down on demand.
- Provide extremely low-latency throughput among storage networks, the data center and the LAN.
- Allow for non-blocking connections between servers to enable automated movement of virtual machines (VMs).
- Function within a management plane that stretches across enterprise and service provider networks.
- Provide visibility despite this constantly changing environment.
Cloud computing networks can be seen as three interdependent structures: the front end, which connects users to applications; a horizontal layer, which interconnects physical servers and carries the movement of their VMs; and storage networks. The larger cloud network can be built as either a Layer 2 or a Layer 3 network.
Cloud computing networks support public, private and hybrid clouds
Enterprises can use cloud computing in a number of ways, each posing a unique set of challenges to networking teams. At this point, many users rely on a public cloud model in which they access hosted applications and data through a service provider that makes resources available over the public Internet on a pay-per-use or subscription basis.
This is probably the simplest approach for enterprise networking teams, which basically work to enable and secure connections to the public Internet. Some companies use a virtual private network (VPN) or dedicated lines to access these hosted applications and storage, which takes a little more work on the part of the networking team. One concern there is avoiding bottlenecks on the main link. For example, if branch Internet traffic is backhauled through corporate headquarters, placing cloud access on that same connection could congest the network and constrain key applications.
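A rough, back-of-the-envelope sketch makes the backhaul concern concrete; all of the figures below (link size, branch count, per-branch and cloud traffic) are hypothetical.

```python
# Illustrative only: estimate utilization of a headquarters Internet link
# when branch traffic and new cloud traffic are both backhauled over it.

HQ_LINK_MBPS = 1000           # hypothetical headquarters Internet link
BRANCHES = 20                 # branches backhauling Internet traffic through HQ
PER_BRANCH_MBPS = 25          # hypothetical average traffic per branch
CLOUD_TRAFFIC_MBPS = 400      # hypothetical cloud/SaaS traffic on the same link

backhaul = BRANCHES * PER_BRANCH_MBPS
total = backhaul + CLOUD_TRAFFIC_MBPS
utilization = total / HQ_LINK_MBPS

print(f"Backhauled branch traffic: {backhaul} Mbps")
print(f"With cloud traffic added:  {total} Mbps "
      f"({utilization:.0%} of the {HQ_LINK_MBPS} Mbps link)")
if utilization > 0.8:
    print("Link is likely to congest; consider local Internet breakout "
          "or a dedicated path to the cloud provider.")
```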
Generally, though, cloud computing is more of a challenge for companies moving to private or hybrid cloud models. Larger firms tend to build internal or private clouds in which they turn their own data center, or a portion of a colocation facility, into a hosted environment.
Others are moving toward a hybrid cloud model in which they provide and manage some resources in private data centers and some in the public cloud. This enables users to place their most security-sensitive applications and data in a private facility while depending on a public cloud for less sensitive applications.
In both scenarios, enterprise networking teams must build internal cloud computing networks, which means seamlessly linking storage and servers to the LAN. In a hybrid model, things get even more complicated because networking teams must internetwork the public and private clouds as well.
Cloud computing network architecture: Gigabit Ethernet switching fabrics
Cloud computing networks are moving away from the typical three-layer switching topology in which access switches are connected to a large pool of aggregation or distribution switches that are then connected to the core. The concern with the traditional model is latency -- it forces packets through a hop at every layer and doesn't provide the any-to-any communication between hundreds of servers and migrating VMs that a cloud environment requires.
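As a quick illustration of the hop-count difference, the sketch below simply counts the switches a packet crosses in the worst case under each design; the paths are schematic, not measurements from a real network.

```python
# Worst-case server-to-server switch hops, assuming traffic crosses the full
# hierarchy in each design. Real paths vary with placement and routing.

THREE_TIER_PATH = ["access", "aggregation", "core", "aggregation", "access"]
LEAF_SPINE_PATH = ["leaf", "spine", "leaf"]

print(f"Three-tier worst case: {len(THREE_TIER_PATH)} switch hops")
print(f"Leaf-spine worst case: {len(LEAF_SPINE_PATH)} switch hops")
```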
It is expected that, over time, enterprises will build cloud networks with a distribution layer of 10 GbE switches (and ultimately 40 GbE and 100 GbE once those standards are approved) that is flattened and broadened out, becoming the communication link between servers with as little blocking as possible. This middle layer of Ethernet switches will be built as a larger fabric so that enterprises can manage it as one or a handful of large switches.
Leaf-spine or fat-tree switching topology for cloud computing network architecture
Some describe this two-layer switching method as a leaf-spine topology or, similarly, a fat-tree topology. In this scenario, servers are connected to leaf switches, which are in turn connected to a broad web of spine switches that provide interconnecting bandwidth between the leaf and spine layers. That fabric of switches is ultimately connected to the larger Internet through the core router.
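The wiring pattern is simple to express in code. The sketch below, using hypothetical switch counts, builds the full mesh of leaf-to-spine links, which is what keeps any two servers at most one spine hop apart.

```python
# Minimal sketch of leaf-spine wiring: every leaf switch has one uplink to
# every spine switch. Switch counts here are illustrative only.
from itertools import product

LEAVES = [f"leaf-{i}" for i in range(1, 9)]    # hypothetical 8 leaf switches
SPINES = [f"spine-{j}" for j in range(1, 5)]   # hypothetical 4 spine switches

# Full mesh between the two layers: one link per (leaf, spine) pair.
fabric_links = list(product(LEAVES, SPINES))

print(f"{len(fabric_links)} leaf-to-spine links")   # 8 * 4 = 32
for leaf, spine in fabric_links[:4]:
    print(f"{leaf} <-> {spine}")
```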
This fabric of switches, which includes as many ports as possible, gives every connection equal access to bandwidth, enabling non-blocking movement of VMs in a visible, any-to-any server environment. With enough buffer memory in each switch, the fabric can forward packets with minimal queuing. The leaf fabric controls the flow of traffic to and from the servers, while the spine fabric moves traffic between leaf nodes bi-directionally.
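Whether a leaf actually delivers that equal, non-blocking bandwidth comes down to its oversubscription ratio: server-facing capacity divided by uplink capacity toward the spines. A minimal sketch, using hypothetical port counts and speeds:

```python
# Illustrative oversubscription check for a single leaf switch.
# A ratio of 1:1 or better means the leaf is non-blocking.

SERVER_PORTS_PER_LEAF = 48      # hypothetical 1 GbE server-facing ports
SERVER_PORT_GBPS = 1
UPLINKS_PER_LEAF = 4            # hypothetical uplinks, one per spine
UPLINK_GBPS = 10

downlink_capacity = SERVER_PORTS_PER_LEAF * SERVER_PORT_GBPS   # 48 Gbps
uplink_capacity = UPLINKS_PER_LEAF * UPLINK_GBPS               # 40 Gbps
ratio = downlink_capacity / uplink_capacity

print(f"Oversubscription ratio: {ratio:.1f}:1")
print("Non-blocking" if ratio <= 1
      else "Oversubscribed -- simultaneous VM moves may contend for uplinks")
```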
Storage in cloud computing networks
Storage plays a major role in cloud computing networks. Applications and data must flow seamlessly among storage systems, the data center and the LAN. Storage arrays, whether Fibre Channel- or iSCSI-based, must be connected to data center servers. Servers are generally equipped with Fibre Channel or Ethernet adapters, which connect them to a Fibre Channel or Ethernet switch. That switch provides connectivity to the storage arrays.
Many vendors are pressing for a conversion to Fibre Channel over Ethernet (FCoE) components, which enable the transport of Fibre Channel traffic over Ethernet. Currently, Ethernet is used for server-to-server and server-to-client connectivity, while Fibre Channel provides server-to-storage connectivity. FCoE encapsulates Fibre Channel traffic in an Ethernet frame, eliminating the need for Fibre Channel adapters in servers and enabling enterprises to manage all of the traffic on one network through one system. Increasingly, manufacturers are introducing FCoE-ready storage and switching components to enable this convergence.
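Conceptually, the encapsulation is just layering. The sketch below is illustrative only: it shows a Fibre Channel frame carried as the payload of an Ethernet frame with the FCoE EtherType (0x8906), not the wire-accurate FC-BB-5 frame format, and the addresses are made up.

```python
# Conceptual sketch of FCoE layering -- not a real protocol implementation.
from dataclasses import dataclass

@dataclass
class FibreChannelFrame:
    source_id: str        # FC source address (S_ID), hypothetical value
    destination_id: str   # FC destination address (D_ID), hypothetical value
    payload: bytes        # e.g., a SCSI command or data

@dataclass
class EthernetFrame:
    src_mac: str
    dst_mac: str
    ethertype: int
    payload: object

def encapsulate_fcoe(fc_frame: FibreChannelFrame,
                     src_mac: str, dst_mac: str) -> EthernetFrame:
    """Wrap an FC frame in an Ethernet frame so it can ride the LAN fabric."""
    return EthernetFrame(src_mac=src_mac, dst_mac=dst_mac,
                         ethertype=0x8906,   # FCoE EtherType
                         payload=fc_frame)

frame = encapsulate_fcoe(
    FibreChannelFrame("0x010100", "0x020200", b"SCSI READ"),
    src_mac="00:1b:21:aa:bb:cc", dst_mac="0e:fc:00:02:02:00")
print(hex(frame.ethertype), frame.payload.destination_id)
```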
Virtual switching crucial to cloud computing networks
In addition to Gigabit Ethernet switches and network interface cards (NICs), network architects and administrators must work with virtual switches and virtual Ethernet adapters, which handle switching between VMs and sometimes between physical servers. VMs connect to the physical network by linking to virtual Ethernet adapters and virtual switches, which then connect to physical Ethernet adapters and switches. Virtual switches not only manage the movement of VMs but also aid in the management of virtual LANs (VLANs) in the data center, which network architects create to help segment, secure and manage large groups of servers and their mobile VMs.
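The toy model below is plain Python, not any vendor's API; it simply illustrates the role described above, with a virtual switch attaching VM virtual adapters to VLAN-tagged ports and forwarding their frames out the host's physical uplink. The VM and interface names are hypothetical.

```python
# Toy virtual switch: maps VM virtual NICs to VLANs on the host's uplink.

class VirtualSwitch:
    def __init__(self, uplink_nic: str):
        self.uplink_nic = uplink_nic      # physical Ethernet adapter on the host
        self.ports = {}                   # VM name -> VLAN ID

    def connect_vm(self, vm_name: str, vlan_id: int) -> None:
        """Attach a VM's virtual Ethernet adapter to a VLAN-tagged port."""
        self.ports[vm_name] = vlan_id

    def forward(self, vm_name: str, frame: bytes) -> str:
        """Tag a frame from a VM with its VLAN and send it out the uplink."""
        vlan = self.ports[vm_name]
        return f"802.1Q tag {vlan} -> {self.uplink_nic}: {len(frame)} bytes"

vswitch = VirtualSwitch(uplink_nic="eth0")
vswitch.connect_vm("web-vm-01", vlan_id=10)
vswitch.connect_vm("db-vm-01", vlan_id=20)
print(vswitch.forward("web-vm-01", b"\x00" * 64))
```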
Every virtualization software product -- or hypervisor -- comes with a virtual switch that manages the traffic of network frames within the physical host server. For the most part, these virtual switches have been invisible to network administrators' tools, leaving virtual switching entirely in the hands of systems administrators and causing nightmares for networking teams. Now, switch vendors are introducing their own virtual switch products, such as the Cisco Nexus 1000V and HP Virtual Connect, which either replace or cooperate with hypervisors' embedded virtual switches, giving network managers and administrators control over the movement of VMs inside and between blade servers.
As with many functions of cloud computing, standards for virtual switching are still emerging, making implementation decisions difficult. Cisco and HP ProCurve have brought competing proposals to the IEEE's 802.1 Working Group this year. Both are aimed at giving network administrators more visibility into the movement of VMs and servers. Cisco's original proposal, VN-Tagging, also known as port extension, calls for some changes to frame formats, which in turn would require hardware upgrades. ProCurve's proposal, Virtual Ethernet Port Aggregator (VEPA), requires only a software upgrade to most existing switches. The companies have created a joint proposal that includes both VEPA and VN-Tagging, dubbed 802.1Qbg and 802.1Qbh, respectively. Both are on the verge of being accepted as new standards projects by the IEEE.
Network automation necessary for cloud computing networks
Very little is static in a cloud environment. Instances of servers and networks are provisioned at the drop of a hat, and automated network processes are essential to provisioning these virtual resources. This means the network architect must seek out automation tools that handle IP address management, configuration management and resource allocation. While these tools exist, they are constantly evolving, and networking teams may find themselves assembling a patchwork of solutions on top of the offerings of their networking or data center portfolio vendors.
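To give a flavor of what such automation involves, here is a minimal IP address management sketch that carves per-tenant subnets out of a hypothetical address block as instances are provisioned; real IPAM tools add persistence, conflict checking and reclamation on top of this idea.

```python
# Minimal IPAM sketch: hand out /24 subnets from a larger block on demand.
# The 10.10.0.0/16 pool and the tenant names are hypothetical.
import ipaddress

POOL = ipaddress.ip_network("10.10.0.0/16")
subnets = POOL.subnets(new_prefix=24)        # generator of /24 subnets
allocations = {}                             # tenant -> assigned subnet

def provision(tenant: str) -> ipaddress.IPv4Network:
    """Assign the next free /24 to a newly provisioned tenant or VM group."""
    subnet = next(subnets)
    allocations[tenant] = subnet
    return subnet

for tenant in ("hr-app", "web-tier", "analytics"):
    print(tenant, "->", provision(tenant))
```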