Solarflare Cloud Onload aims to improve server efficiency

Solarflare Cloud Onload, a new application acceleration platform, was designed to improve data center efficiency and remove waste from today's cloud infrastructure investments.

Solarflare has announced Cloud Onload, an application acceleration platform designed to allow data centers to reclaim 25% or more in server resources.

Solarflare Cloud Onload, the first in a planned family of application acceleration products, works by improving server efficiency and performance, according to the vendor. Cloud Onload was created with the goal of removing billions of dollars of waste from today's private and public cloud infrastructure investments. Solarflare's hope is that customers can convert wasted servers into services, leading to increased revenue and lower costs.

Solarflare Cloud Onload accelerates and scales a data center's network-intensive in-memory databases, software load balancers and web servers, so one server can do the work of four. Additionally, the vendor claimed the platform improves reliability, enhances quality of service and delivers higher ROI. The platform reduces latency while increasing transaction rates for nearly all Transmission Control Protocol applications, in both physical and virtualized environments.

According to Solarflare, benchmark testing showed a twofold improvement in in-memory databases, including Couchbase, Memcached and Redis; a tenfold improvement in software load balancers, including NGINX and HAProxy; and a 50% improvement in web servers and applications, such as NGINX and Node.js.

Cloud Onload uses Solarflare's kernel bypass technology. By operating in user space instead of kernel space, Cloud Onload reduces CPU interruptions, eliminates context switches and minimizes memory copies on all network data, according to Solarflare.

Data centers can also use the platform to achieve Remote Direct Memory Access-like performance without any modifications to existing software, according to the vendor. Solarflare Cloud Onload is designed to run in any Linux environment, whether open source, bare-metal, virtual machine or container.
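Because Cloud Onload works by intercepting standard BSD sockets calls in user space, the application code itself looks no different from any ordinary TCP program. The sketch below is a plain loopback round trip using unmodified socket calls; it is an illustration of the kind of code Onload can accelerate transparently, not Solarflare's own API (the function name and setup here are hypothetical).

```python
import socket

def loopback_echo(payload: bytes) -> bytes:
    """Plain BSD-sockets TCP round trip over loopback.

    Cloud Onload intercepts standard socket calls like these in user
    space (kernel bypass), so code of this shape needs no source
    changes to be accelerated -- it is simply launched under Onload.
    """
    # Listening socket; port 0 asks the OS for any free port.
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind(("127.0.0.1", 0))
    lsock.listen(1)
    host, port = lsock.getsockname()

    # Ordinary connect/accept/send/recv -- nothing Onload-specific.
    client = socket.create_connection((host, port))
    server, _ = lsock.accept()
    client.sendall(payload)
    echoed = server.recv(len(payload))

    client.close()
    server.close()
    lsock.close()
    return echoed
```

The point of the example is what is absent: no special headers, libraries or flags appear in the code, which is how the vendor can claim acceleration "without any modifications to existing software."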

Other claims Solarflare makes for Cloud Onload include supporting 400% or more additional users, reducing latency by 50%, increasing message rates by 100%, shrinking server footprint by 25% or more, maximizing Capex and Opex savings, and enhancing quality of service.

Solarflare Cloud Onload is available now, and it runs on 10/25/40/50/100 Gigabit Ethernet networks.
