Confluent introduces ksqlDB event streaming database
Getting streaming data to work inside a database often involves disparate vendors and technologies, a challenge that Confluent aims to help fix.
Confluent is looking to make it easier for developers to use stream processing with a new event streaming database it calls ksqlDB.
The ksqlDB event streaming database became generally available on Nov. 20 and builds on the vendor's expertise in streaming technologies, including its KSQL query language for streaming data, as well as the Apache Kafka open source streaming data technology.
Kafka is widely used to stream data, though users consume that data in many different ways and pull it into different types of databases. With ksqlDB, Confluent is providing what it positions as a new type of database built specifically for event streaming.
Maureen Fleming, an IDC analyst, said ksqlDB will be useful to enterprises but is not strictly a new type of database.
"I consider this a bundle consisting of event stream processing and an optimized database that is useful for event streaming use cases," she said.
There is a need now for better ways to handle streaming data, and that's where ksqlDB could fit for database users. Demand for technology such as ksqlDB is driven by use cases that call for faster, more responsive cycle times and by increasingly distributed applications and systems that need to work together more seamlessly, Fleming said.
"Today, it takes a good amount of expertise in every element of collecting, ingesting, processing and evaluating streams of data," Fleming said. "Anything that changes that complexity dynamic is important."
The need for an event streaming database
If you look closely at nearly any event streaming system today, you'll find a cobbled mess of piecemeal systems from different vendors, said Michael Drogalis, product manager at Confluent. Many event streaming systems include different subsystems for extracting, storing, processing and querying data.
"Developers have to know the ins and outs of each of these subsystems, making integrations and scaling extremely cumbersome," Drogalis said. "It's like a car built out of parts that come from different manufacturers who don't talk to each other."
Confluent's goal with ksqlDB is to create a single technology that alleviates the complexity usually involved in building stream processing applications. Drogalis said that Confluent consolidated the various subsystems, so developers need just two components: ksqlDB and Apache Kafka.
Kafka and ksqlDB
Confluent's core platform supports and extends Apache Kafka for data streaming. To create a new type of database that's specifically geared toward event streaming, Confluent used critical elements of Apache Kafka as ksqlDB's foundation, such as its durable data storage and stream processing runtime. Layered on top of that foundation is ksqlDB's remote SQL API, which executes queries, controls connectors and generally manipulates data.
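As a rough illustration of what that SQL layer looks like in practice, a developer might declare a stream over an existing Kafka topic and then materialize an aggregate from it. The topic, column and object names below are hypothetical, and exact syntax can vary by ksqlDB version.

-- Declare a stream over an existing Kafka topic (topic and column names are hypothetical).
CREATE STREAM rider_locations (profile_id VARCHAR, latitude DOUBLE, longitude DOUBLE)
  WITH (kafka_topic = 'rider_locations', value_format = 'json');

-- Continuously materialize a table from the stream; ksqlDB keeps it up to date
-- as new events arrive on the underlying topic.
CREATE TABLE rides_per_rider AS
  SELECT profile_id, COUNT(*) AS ride_events
  FROM rider_locations
  GROUP BY profile_id
  EMIT CHANGES;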
One of the innovations in ksqlDB is a new pull query capability.
"The pull query function is analogous to a select, or data retrieval, query against a traditional database," Drogalis explained. "Pull queries allow an app to obtain a result that is true as of now."
Drogalis noted that ksqlDB also supports push queries, which enable an app to issue a query and subscribe to a stream of result changes over time. Both pull and push queries are important for organizations that need real-time information as well as a historical view of their data.
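Against the hypothetical stream and table sketched above, the two query types might look like the following; the predicates and names are illustrative only.

-- Pull query: fetch the current value for one key, like a SELECT against a
-- traditional database, then return immediately.
SELECT * FROM rides_per_rider WHERE profile_id = 'rider-42';

-- Push query: subscribe to the query's results and receive every change as it
-- happens; EMIT CHANGES keeps the query running until the client cancels it.
SELECT profile_id, latitude, longitude
FROM rider_locations
WHERE latitude > 37.0
EMIT CHANGES;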
Also of note with ksqlDB is the extensibility of the database, with built-in connectors to other commonly used technologies, such as Amazon S3 storage.
"With other event streaming database platforms, developers need to write custom code for each connector," Drogalis said. "They can now skip those steps because ksqlDB is capable of running any connector in the Kafka Connect ecosystem."