
Open Compute Project influence, open source hardware gain momentum

With open source hardware more feasible than ever, catch up on how the Open Compute Project consortium uses open source methods to develop more efficient server and infrastructure designs.

Open source forms the foundation of many data center and enterprise applications. It has also become the preferred development methodology for cloud-native systems and networking software. From its origins at the fringes of the software business -- as a way for academics and hackers to share code -- open source has grown into the most comprehensive, scalable and versatile development model for enterprises, service providers, startups and researchers.

But what about open source hardware? There are several reasons to think open source hardware infrastructure's time has come.

The Open Compute Project aims to bring the benefits of the open source software model to hardware infrastructure. It has long had the support of data center giants, and products developed under its auspices are starting to touch every part of the data center.

Why open source hardware -- and why now?

Open source has typically been equated with software, since the intangible nature of code makes it easier to share, duplicate and modify than hardware. However, the same sort of design sharing, component reuse and cooperative development has finally become feasible for hardware thanks to several significant changes in both technology and the hardware supply chain, namely:

  • The increased commodification of hardware, making the critical components of a server, storage array or network switch widely available from multiple merchant providers.
  • The "codification" of hardware via standard design modules and development tools for use in building custom systems on a chip, field-programmable gate arrays, circuit boards or even hardware chassis (see the sketch after this list).
  • An ample supply of silicon foundries and OEM contract manufacturers willing to build custom designs at the chip, board and system levels.
  • Increased automation of hardware design verification and component assembly that makes low-volume manufacturing runs financially feasible.
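
To make the "codification" and automation points concrete, here is a minimal, purely illustrative Python sketch -- not drawn from any Open Compute Project specification -- of hardware described as versioned, reusable design modules that tooling can validate automatically. All class names and figures in it are hypothetical.

    # Illustrative only: a toy model of "hardware as code," in which a design is a
    # versioned, shareable module that tools can check automatically. The class
    # names and the rack-width figure are hypothetical, not from an OCP spec.
    from dataclasses import dataclass, field

    RACK_WIDTH_MM = 533  # roughly the 21-inch Open Compute Project rack width

    @dataclass
    class Component:
        name: str
        width_mm: float
        power_w: float

    @dataclass
    class DesignModule:
        name: str
        version: str
        components: list = field(default_factory=list)

        def total_power(self) -> float:
            return sum(c.power_w for c in self.components)

        def validate(self, power_budget_w: float) -> None:
            # Automated checks stand in for the design-verification tooling noted
            # above; a real flow would run electrical, thermal and signal rules.
            if any(c.width_mm > RACK_WIDTH_MM for c in self.components):
                raise ValueError("component too wide for the rack")
            if self.total_power() > power_budget_w:
                raise ValueError("design exceeds its power budget")

    # Compose a sled from shared, commodity building blocks and validate it.
    sled = DesignModule("storage-sled", "1.2.0", [
        Component("mainboard", 480, 150.0),
        Component("drive-bay", 500, 90.0),
    ])
    sled.validate(power_budget_w=400.0)
    print(f"{sled.name} v{sled.version} draws {sled.total_power()} W")

Because designs expressed this way can be stored, diffed and reviewed like source code, they lend themselves to the same collaborative workflow the article describes for software.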

Into this mix of techno-economic change come significant shifts in hardware-buying patterns fueled by cloud computing, as enterprises increasingly outsource infrastructure to cloud services. Driven to maximize efficiency and reduce cost, these technically sophisticated cloud operators find traditional enterprise hardware woefully inadequate for their needs, so they increasingly go directly to OEMs to build custom or white box (commodity) servers, switches and storage arrays designed for efficiency, density and scalability.

Cloud operators such as AWS, Facebook, Google and Microsoft are, however, in the business of selling services, not hardware. Furthermore, the commodification of technology means their data center hardware rarely provides any differentiated advantage, which has led a growing number of them to see an open source hardware development model as the better way to meet their collective needs.

The benefits of an open source methodology are many

  • Increased efficiency through sharing code and best practices.
  • Heightened innovation from diverse viewpoints and increased collaboration.
  • Greater reliability and security through transparency and more thorough software testing.
  • Faster development times through code modularity and work sharing.
  • Lower cost, with most packages free to use.

Enter: The Open Compute Project

The Open Compute Project began about a decade ago as an internal Facebook engineering effort to design systems and facilities for a new 334,000-square-foot data center in Prineville, Oregon, that would be critical to the budding company's growth. Indeed, the site has since been expanded to include seven facilities totaling almost 3 million square feet.


Facebook's team began with a blank slate to design the most efficient servers and data center possible, using a 480-volt electrical system to minimize voltage-conversion losses and removing any system component that was wasteful or not critical to the machine's function as either a compute engine or storage array. Unlike internal design efforts at most other companies, the Facebook team decided to emulate open source software and publish its design specifications for others to use and improve.
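
On the voltage-conversion point: losses compound multiplicatively across each conversion stage between the utility feed and the server, so eliminating stages raises end-to-end efficiency. The short Python sketch below walks through that arithmetic with hypothetical stage efficiencies and power-path labels; the figures are illustrative assumptions, not Facebook's published measurements.

    # Hypothetical stage efficiencies, for illustration only -- not Facebook's
    # published numbers. End-to-end efficiency is the product of all stages.
    from math import prod

    # Conventional path: utility -> UPS -> PDU transformer -> server power supply
    conventional_path = {"ups": 0.94, "pdu_transformer": 0.97, "server_psu": 0.90}

    # Reduced path: 480-volt power carried closer to the rack, fewer conversions
    reduced_path = {"server_psu": 0.93}

    def end_to_end(stages: dict) -> float:
        return prod(stages.values())

    print(f"conventional: {end_to_end(conventional_path):.1%}")  # about 82%
    print(f"reduced:      {end_to_end(reduced_path):.1%}")       # 93.0%

Even a few percentage points of improvement matter at data center scale, since every watt lost in conversion also adds to the cooling load.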

Rejecting conventional designs and rethinking existing assumptions led Facebook to develop vastly more efficient systems that cut both capital equipment and operational costs. Its goal for the Open Compute Project was to create a like-minded community of data center operators, equipment manufacturers and users dedicated to developing the most cost- and energy-efficient computing infrastructure possible. Within months, Facebook hosted the first Open Compute Project Summit with 200 in attendance to see its first Open Compute Project server and storage system designs.

The idea of using open hardware and collaborative development to create better hardware and facilities platforms resonated with many cloud stakeholders, and the Open Compute Project took off. There have been global and regional summits every year since the 2011 launch. The 2019 Global Summit featured 178 member companies, 124 marketplace products and 3,600 attendees. The Open Compute Project leadership touts estimates from IHS Markit that predict 120% growth in Open Compute Project equipment sales to $10.7 billion by 2022.

Server and storage system designs revealed at the inaugural Open Compute Project Summit in 2011.

Like other open source projects, the Open Compute Project has a governing body called the Open Compute Foundation that structures the efforts of individual and corporate contributors into agreed-upon projects and working groups that span nine categories:

  • Data center facilities
  • Hardware management
  • Networking
  • Open system firmware
  • Racks and power distribution
  • Security
  • Server hardware
  • Storage hardware
  • Telco-specific facilities and equipment

Commercial Open Compute Project hardware products

As its vision and project structure illustrate, the Open Compute Project reaches into every corner of the data center. A few efforts -- notably in server, storage and network switch hardware, system chassis/enclosures and rack specifications -- are far more mature than others, however. The following highlights some recent, notable Open Compute Project product developments.

OEMs such as Foxconn Electronics, Wiwynn and ZT Systems have released servers based on Microsoft's Project Olympus specification, which was submitted to the Open Compute Project in 2017 and which Microsoft uses to run much of the infrastructure powering Azure. These include 1U and 2U systems from Wiwynn, 3U AMD EPYC and Intel Xeon expansion chassis from ZT Systems and a 1U dual-socket ThunderX2 (ARM) system from Foxconn.

Microsoft's Project Olympus server powers much of Azure's infrastructure and is the basis for server products from several OEMs.

Wiwynn built a 32-bay storage server and 72-bay expansion chassis based on Facebook's Bryce Canyon storage design. Inspur developed a 2U Goose Lake storage server designed for the larger 21-inch Open Compute Project rack spec.

Several top-of-rack switches based on the Facebook Wedge design are available from Edgecore Networks, including a 32x100 Gigabit Ethernet 1U model (since updated to 64 ports), and a 56x25 GbE device. Cumulus Networks offers a 32x100 GbE turnkey switch preloaded with its Linux network operating system.

Recently, many Open Compute Project contributors like Facebook have focused on modular designs with plugin accelerators for AI and other applications under a new Open Accelerator Infrastructure subproject. Another new project, OpenEdge, reflects the growing demand for edge computing infrastructure to offload cloud systems and 5G wireless backbones by moving processing close to network on-ramps.

Edgecore bases several of its switches, including the top-of-rack or spine Wedge100BF-65X, on Facebook's Open Compute Project Wedge design.

The Open Compute Project is also working on a set of integrated offerings targeting particular industries, applications or use cases such as AI, demo systems and private cloud. The portfolio will undoubtedly expand as members come forth with new designs and vendors incorporate emerging technology from Open Accelerator and OpenEdge.

Over time, these integrated systems will proliferate as commercial products from OEMs and traditional converged and hyper-converged infrastructure vendors. They will certainly enhance variety and competition within the data center infrastructure market to the benefit of enterprises.
