New funding for Anyscale boosts popular Ray AI platform
The startup raised $99 million and released a new version of its open source framework, which helps enterprises scale AI workloads and projects with available resources.
A new funding round added more momentum for fast-growing AI startup Anyscale, founded by the creators of Ray, an open source framework for organizations looking to scale AI workloads.
Along with revealing $99 million in a new series C extension funding round -- led by existing investors Addition, Intel Capital and Foundation Capital -- the vendor introduced Ray 2.0 at the Ray Summit in San Francisco on August 23.
The new funding comes after Anyscale secured $100 million in a series C round in December.
Ray is used by organizations around the world, including Uber, Meta and Instacart, to grow their AI workloads.
Ray 2.0's capabilities include Ray AI Runtime, which adds a runtime layer for machine learning applications and services; KubeRay, a collaboration among Anyscale, Microsoft and ByteDance for managing Ray clusters on Kubernetes; and integrations that let developers combine Ray with other machine learning libraries, data platforms and MLOps platforms.
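For context, Ray's appeal starts with its core API, which the new runtime layer builds on: an ordinary Python function becomes a distributed task with a decorator. The sketch below is illustrative only -- it is not drawn from Anyscale's announcement -- and assumes a default local Ray installation.

```python
import ray

ray.init()  # starts a local Ray instance; on a cluster this attaches to it instead

# An ordinary Python function becomes a distributed task with one decorator.
@ray.remote
def preprocess(batch):
    # Placeholder transformation standing in for real feature engineering.
    return [x * 2 for x in batch]

# Tasks run in parallel across whatever CPUs (or cluster nodes) are available.
futures = [preprocess.remote(list(range(i, i + 4))) for i in range(0, 16, 4)]
print(ray.get(futures))
```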
An abstracted layer
Anyscale is trying to create an abstracted layer that can use distributed resources and data to execute a specific function, said Jim McGregor, an analyst at Tirias Research.
This approach enables enterprises to run their AI projects on any system without tying them to specific hardware, he said. Instead of programming a project for a particular GPU or CPU, developers write the project first; the platform then decides on the best server or processor to run it.
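In Ray's terms, that roughly means a task declares the resources it needs and the scheduler picks the machine. The following is a minimal, hypothetical sketch of that pattern using Ray's standard num_cpus and num_gpus annotations; the function names and batch sizes are made up for illustration.

```python
import ray

ray.init()

# The application declares what each step needs; Ray decides where it runs.
@ray.remote(num_cpus=1)
def score_on_cpu(batch):
    return [x * 0.5 for x in batch]

# Declaring num_gpus=1 is enough to route this task to a GPU node when the
# cluster has one; the application code itself does not have to change.
@ray.remote(num_gpus=1)
def score_on_gpu(batch):
    return [x * 0.5 for x in batch]

batches = [list(range(i, i + 8)) for i in range(0, 32, 8)]
# Only the CPU variant is invoked here so the sketch runs on a laptop.
print(ray.get([score_on_cpu.remote(b) for b in batches]))
```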
Anyscale's approach is attractive to enterprises and investors for several reasons, McGregor said.
One is that most AI inference workloads still run on CPUs; for many enterprises, it's expensive to run those workloads on GPUs even though GPUs are more efficient, he said.
"They're trying to make it easier as AI and some of these other tasks get more and more compute and data intensive to be able to [run AI workloads] at an abstracted level and use all the resources that are out here," he said.
Anyscale is also appealing since the startup works with big tech companies such as Microsoft and has big investors like Intel, he said.
But while Anyscale is one of the more successful early vendors trying to help enterprises scale their AI workloads from the computer to the cloud using the resources on hand, the vendor is not the only player in this market, McGregor added. The tech giants are also working on helping enterprises address the problem of scaling large AI workloads.
"It's still early in the market even for AI," he said. "We're still developing new models; we're still developing new ways to train. The market's open."
A Ray user's view
Cohere is a natural language processing vendor that started using Ray in 2021. Using the platform, an IT team at Cohere wrote a new framework for distributed training of large language models.
Cohere turned to Ray because team members of all skill levels found it easy to understand and use Ray to write the new framework, said Siddhartha Kamalakara, a machine learning engineer at Cohere.
"Ease of use is a big thing for us," he said. He added that using an open source tool like Ray was also helpful because the team could dive into Ray's source code to figure out problems that came up while trying to write the framework.
At the conference, Anyscale also introduced its new Enterprise-Ready Platform, which gives IT and security teams cluster connectivity and customer-managed virtual private clouds. The vendor also released its Ray platform for enterprises, which lets organizations build AI and Python applications using Ray.