Latest version of Vdura Data Platform aimed at enterprise AI
Vdura Data Platform version 11 introduces a microservices-based architecture and composable hardware for performance and capacity, including the use of HDDs.
Software-defined parallel file vendor Vdura is bringing higher performance and reliability to both AI and high-performance computing workloads with the latest version of its data platform.
Vdura Data Platform version 11 will deliver 11 times the performance of version 10 and 11 nines of durability, according to the company. The new version introduces a microservices-based architecture, giving Vdura, formerly Panasas, more flexibility in how it deploys its software across nodes to address HPC and AI workloads.
Vdura also certified hardware platforms consisting of all-flash Director Nodes (storage nodes whose SSDs are dedicated to metadata) and Hybrid Storage Nodes that pair a Director Node with HDDs to deliver more cost-effective capacity for AI and HPC workloads than all-flash storage.
As AI workloads come into the enterprise, storage needs to adapt to meet changing demands, according to Mike Matchett, an analyst at Small World Big Data. To address new challenges such as neural net training, vector analysis and vector embeddings, enterprises need the equivalent of a small supercomputing cluster and storage to match.
"Enterprises have to adopt parallel file systems and high-performance class storage, and it's not something they can easily do," Matchett said.
This puts Vdura in the same competitive landscape as IBM Spectrum Scale, DDN Infinia and Vast Data, he said.
VeLO, VPOD and durability
The microservices-based architecture is a critical update to the Vdura platform, according to CEO Ken Claffey. It includes a new key-value store, Velocity Layer Operations (VeLO), Vdura's flash-based metadata engine, with each instance delivering up to 2 million IOPS. Customers can add as many instances as they need to a single namespace.
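As a rough illustration of that scale-out model, the Python sketch below adds up per-instance metadata IOPS as VeLO instances join a namespace. The 2 million IOPS per-instance figure is Vdura's stated ceiling; the linear-scaling assumption and the efficiency factor are placeholders, not published sizing guidance.

# Illustrative sketch: aggregate metadata IOPS as VeLO instances are added
# to a single namespace. The 2M IOPS per-instance figure is Vdura's stated
# ceiling; the linear-scaling assumption is ours, not a published sizing rule.

IOPS_PER_VELO_INSTANCE = 2_000_000  # stated ceiling per VeLO instance

def namespace_metadata_iops(num_instances: int, efficiency: float = 1.0) -> int:
    """Rough aggregate metadata IOPS for a namespace with N VeLO instances.

    efficiency < 1.0 models real-world overhead (coordination, hot spots);
    the value is a placeholder, not a measured number.
    """
    return int(num_instances * IOPS_PER_VELO_INSTANCE * efficiency)

if __name__ == "__main__":
    for n in (1, 4, 16):
        print(f"{n:2d} VeLO instances -> ~{namespace_metadata_iops(n, 0.9):,} metadata IOPS")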
Microservices also enabled Vdura to create Virtualized Protected Object Device (VPOD) storage units, which sit on the HDD layer. Vdura offers erasure coding across the cluster as well as additional erasure coding within each VPOD for better durability.
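To give a sense of why two coding layers help, here is a toy durability model in Python: an inner code repairs drive failures inside a VPOD, and an outer code across the cluster only sees a VPOD as failed when its inner code is overwhelmed. The stripe widths and failure probabilities are illustrative assumptions, not Vdura's published durability math.

# Toy model of layered erasure coding: an inner code inside each VPOD repairs
# local drive failures, and an outer code across the cluster protects whole
# VPODs. All parameters and the independence assumption are illustrative;
# this is not Vdura's actual durability calculation.
from math import comb

def loss_probability(n: int, m: int, p: float) -> float:
    """P(data loss) for an n-wide stripe tolerating m failures,
    with independent per-unit failure probability p over some window."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1, n + 1))

# Inner layer: 8+2 coding over drives in a VPOD, assuming a 1% per-drive
# failure risk during a rebuild window (placeholder figure).
p_vpod_loss = loss_probability(n=10, m=2, p=0.01)

# Outer layer: 16+4 coding across VPODs; a VPOD only counts as failed for the
# outer layer if its inner code could not repair the damage.
p_cluster_loss = loss_probability(n=20, m=4, p=p_vpod_loss)

print(f"Inner (per-VPOD) loss probability:      {p_vpod_loss:.3e}")
print(f"Outer (cluster-level) loss probability: {p_cluster_loss:.3e}")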
The modular design and added durability are important for both AI and HPC workloads, according to William McKnight, president of McKnight Consulting Group. Vdura is now architected so that if one node goes down, performance drops only about 1%.
"If a node goes down, it's a big deal," he said. "They've designed it very modularly to keep, not only durability up but to keep performance up."
HDDs and AI
The new microservices architecture also allows Vdura to be hardware agnostic and composable, Claffey said. To this end, Vdura introduced certified hardware to optimize its capabilities.
For VeLO instances, Vdura uses 1U Director Nodes with AMD EPYC 9005 CPUs, Nvidia ConnectX-7 network interface cards, Broadcom 200Gb Ethernet and Phison PCIe NVMe SSDs. If a customer needs more performance, more Director Nodes can be added.
For VPOD instances, Vdura is using HDD-based JBODs for higher capacity at lower costs compared with all-flash.
AI and HPC do have high-performance requirements, but with flash prices up 50% year over year, costs can drive architectural decisions, McKnight said.
"They're targeting price performance … which is an important metric for system selection," he said.
While costs are important, performance needs can still be met with a combination of flash and spinning disks, Matchett said.
"If you can build a cluster, get tons of spindles working in your favor and manage the availability … you're looking for total IOPS," he said. "If hard drives are still cheaper per spindle to build up to total IOPS, that's still going to be where people are going."
Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware and private clouds. He previously worked at StorageReview.