Oracle launches HeatWave GenAI to fuel AI development
The tech giant updated its database with new features aimed at simplifying model and application development cost-effectively, including in-database LLMs.
Oracle on Wednesday launched HeatWave GenAI, a version of the tech giant's HeatWave database designed to enable enterprises to easily combine their proprietary data with generative AI capabilities cost-effectively.
Now generally available at no extra cost to existing HeatWave customers, HeatWave GenAI includes in-database large language models (LLMs), automated in-database vector storage, scalable vector search and HeatWave Chat to enable users to work with data using natural language rather than code.
Collectively, Oracle's new capabilities amount to an important addition given the growing interest among enterprises in developing generative AI models and applications, according to Holger Mueller, an analyst at Constellation Research.
"AI is critical for all aspects of the software stack, including databases," he said. "With the [growing] adoption of AI … customers of HeatWave get a new lease on life as that will allow them to add AI to their applications and thus futureproof them. It's not [just] an incremental update. It's more evolutionary."
Based in Austin, Texas, Oracle offers a variety of databases among its data management capabilities. HeatWave is a MySQL database that enables users to query and analyze data without having to extract, transform and load data into another environment.
Competing data storage platforms include Amazon Redshift, Databricks, Google BigQuery, Snowflake and Teradata.
In July 2023, Oracle extended its HeatWave suite to include a data lakehouse.
A wave of GenAI
Generative AI development has exploded in the 19 months since OpenAI launched ChatGPT, which represented a significant improvement in large language model capabilities.
LLMs, which have vocabularies as large as any dictionary and can infer intent, enable true natural language interactions. When trained with an enterprise's proprietary data, they extend those interactions to that data, allowing users to model, query and analyze it without having to write code.
As a result, for the first time, analytics can inform the work of far more people within an organization than the small percentage who are data experts.
Many enterprises, therefore, are developing generative AI models and applications to make data-informed decision-making more widespread. Developing those models and applications, however, is complicated.
It takes massive amounts of high-quality data, both structured and unstructured, to train generative AI to be accurate. That requires technologies such as vector embeddings to give structure to unstructured data, vector search and retrieval-augmented generation (RAG) to surface the data relevant to a given model or application, and secure, governed pipelines to safely move that data into models and applications.
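As a rough illustration of those steps -- not Oracle's implementation -- the following Python sketch embeds a handful of documents, runs a cosine-similarity vector search and assembles a retrieval-augmented prompt. The embed_text function here is a toy stand-in for a real embedding model, and the final prompt marks where an LLM call would go.

```python
# Minimal, generic sketch of an embed -> search -> RAG flow.
# embed_text() is a toy stand-in for a real embedding model.
import hashlib
import numpy as np

def embed_text(text: str, dims: int = 256) -> np.ndarray:
    """Toy embedding: hash each token into a fixed-size vector."""
    vec = np.zeros(dims)
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# 1. Vector embedding: give structure to unstructured documents.
documents = [
    "HeatWave is a MySQL database service with in-database analytics.",
    "Vector search finds the documents most similar to a query.",
    "Retrieval-augmented generation grounds LLM answers in retrieved data.",
]
doc_vectors = np.stack([embed_text(d) for d in documents])

# 2. Vector search: rank documents by cosine similarity to the query.
query = "How does retrieval-augmented generation work?"
scores = doc_vectors @ embed_text(query)
top_docs = [documents[i] for i in np.argsort(scores)[::-1][:2]]

# 3. Retrieval-augmented generation: pass the retrieved context to an LLM.
prompt = (
    "Answer using only this context:\n"
    + "\n".join(top_docs)
    + f"\n\nQuestion: {query}"
)
print(prompt)  # in a real pipeline, this prompt would be sent to an LLM
```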
HeatWave GenAI aims to simplify the development of generative AI models and applications, according to Nipun Agarwal, Oracle's senior vice president of MySQL Database and HeatWave. In addition, the new capabilities are designed to reduce the compute power such development previously required, which lowers its cost.
"It makes using LLMs much easier," Agarwal said.
Perhaps the most significant new capabilities are the in-database vector store and LLMs, which are native to HeatWave itself rather than housed in external systems the database integrates with, according to Mueller.
Oracle's inclusion of in-database LLMs, including models from Cohere, Meta and Mistral AI, is designed to reduce the complexity and cost of developing generative AI tools.
With LLMs already in their database, users can perform the tasks needed to develop models and applications without exporting data into a potentially insecure environment or importing potentially insecure LLMs into their data environment. And because no exporting or importing is needed, users avoid the costs normally associated with moving large amounts of data or voluminous LLMs.
In-database vector storage is an automated feature that similarly saves users from having to move data -- in this case, to a specialized vector database -- while ensuring that relevant data is selected for model and application training.
"The native vector capability in the same database makes processing and creating vectors more efficient," Mueller said. "[So does] the native LLM in the database. Latency matters, and it is the right and elegant solution to run the LLM right where the data is."
Beyond making generative AI development more efficient, Oracle's new in-database capabilities differentiate HeatWave from databases provided by other vendors, according to Stephen Catanzano, an analyst at TechTarget's Enterprise Strategy Group.
Just as Oracle users previously had to use external AI services and specialized databases, customers of other database vendors still must use those external tools to train generative AI models and applications.
"Oracle's approach with HeatWave GenAI distinguishes itself by embedding LLMs and vector processing directly into the database, offering a unified platform for AI and data operations," Catanzano said. "This integration contrasts with approaches from other database vendors, thereby enhancing Oracle's appeal for GenAI development."
Beyond the in-database capabilities, HeatWave GenAI includes scalable vector search and an AI assistant that enables customers to use natural language rather than code when developing generative AI tools.
Scalable vector search is aimed at improving the speed of vector searches, which reduces the cost of such searches given that clouds often employ usage-based pricing.
The improved speed, which stems from HeatWave's scale-out architecture enabling parallel processing across up to 512 nodes, makes vector searches significantly faster in HeatWave than in Databricks, Google BigQuery and Snowflake, Oracle claimed.
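The scale-out idea itself is straightforward to illustrate, even though HeatWave's engine is far more sophisticated. This conceptual Python sketch partitions a set of vectors into shards, scans the shards in parallel and merges the per-shard results into a global top-k list; all names and numbers are hypothetical.

```python
# Conceptual sketch of a scale-out vector search: partition the vectors into
# shards, scan each shard in parallel, then merge the per-shard results.
# This mirrors the general idea, not HeatWave's actual engine.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def search_shard(args):
    """Return (score, global_index) pairs for the best matches in one shard."""
    shard, query, offset, k = args
    scores = shard @ query
    top = np.argsort(scores)[::-1][:k]
    return [(float(scores[i]), int(i + offset)) for i in top]

def parallel_search(vectors: np.ndarray, query: np.ndarray, shards: int = 8, k: int = 5):
    chunks = np.array_split(vectors, shards)
    offsets = np.cumsum([0] + [len(c) for c in chunks[:-1]])
    jobs = [(chunk, query, off, k) for chunk, off in zip(chunks, offsets)]
    with ProcessPoolExecutor(max_workers=shards) as pool:
        partial = [hit for result in pool.map(search_shard, jobs) for hit in result]
    return sorted(partial, reverse=True)[:k]  # merge: keep the global top k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vectors = rng.normal(size=(100_000, 128)).astype(np.float32)
    query = rng.normal(size=128).astype(np.float32)
    print(parallel_search(vectors, query))
```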
HeatWave Chat, meanwhile, is an interface within the database that enables developers to ask questions in natural language and receive responses, including citations of the sources used to inform the responses so they can be verified.
Collectively, the new HeatWave GenAI tools are a significant advancement for AI developers using Oracle, according to Catanzano.
"The release adds substantial value rather than being merely incremental," he said. "It introduces foundational capabilities … that fundamentally enhance database functionality with built-in AI capabilities that streamline operations, boost performance and lower costs significantly."
Following the launch of MySQL HeatWave Lakehouse in July 2023, Oracle collected feedback from customers regarding what more they wanted as part of their HeatWave environment. That feedback resulted in the new features aimed at enabling simple, cost-effective generative AI development, according to Agarwal.
"When we talked to customers, many of them were asking to have the power of LLMs for their enterprise content," he said.
Developers were able to build generative AI models and applications with existing Oracle capabilities, but it wasn't easy or inexpensive.
As with a new smartphone, basic capabilities were immediately available. But for complex development -- just as a smartphone user needs to add apps beyond what comes with the phone -- Oracle users needed to pull in outside capabilities to build generative AI tools.
In particular, they needed graphics processing units (GPUs) to provide enough compute power for generative AI models and applications. GPUs, however, are both scarce and expensive, Agarwal noted.
"Customers asked us to do something that made [GPUs] more available to them," he said. "We heard about these constraints, and then we innovated upon them."
Once Oracle collected feedback from its customers, it took nearly a year to ready the new generative AI-oriented capabilities for general availability, according to Agarwal.
Next steps
Now, just as Oracle spoke with customers in 2023 to discover what more they wanted in HeatWave, the tech giant plans to again collect customer feedback to shape the next year's roadmap, according to Agarwal.
With HeatWave GenAI now in the hands of customers, Oracle plans to observe what customers create, listen to what more they need and then work to provide it.
"What we are excited to see is what customers build and what they would like us to do next," Agarwal said.
Mueller said that with Oracle's addition of new features designed to simplify generative AI development, there are no obvious capabilities HeatWave is lacking. As a result, the tech giant would be wise to focus further development on continuing to improve the database's existing features.
"They have run out of general capabilities that are needed," Mueller said. "I expect depth, scale and maturity to be the focus in upcoming releases."
Catanzano similarly noted that Oracle is meeting customer needs with its product development but suggested it could provide users with more choices when deciding which LLM to use with their data.
"They are making great progress on filling in the gaps for developers looking to create GenAI applications," he said. "More integrations with LLMs and tools will continue to help."
Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.