
Data pipelines feed IT’s observability beast

Amid data growth, cloud complexity and demand for advanced automation, the data pipelines developed to satisfy the appetites of AI apps also serve observability tools.



Growth in the size of corporate data sets and the increasing impact of digital apps on business bottom lines have IT teams borrowing from the AI world to keep pace with observability.

Data pipelines are a component of DataOps, an organizational approach to data management that arose out of the need to optimize data sets for big data analytics and enhance their business value. DataOps applies many of the Agile and DevOps principles familiar to software developers, such as breaking down silos between data sets, encouraging self-service and collaboration, and embracing IT automation for repetitive tasks.

Data pipeline tools already well established in this field include the open source project Kafka and Amazon's Kinesis. These frameworks automate and standardize the process of gathering, transforming and migrating data from its source into repositories optimized for AI.
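At its core, that process follows a gather-transform-load pattern. The following is a minimal, framework-agnostic sketch of that pattern in Python; the function and field names are illustrative assumptions, not Kafka or Kinesis APIs, and real frameworks add durability, partitioning and scale on top of this shape.

```python
# Illustrative gather -> transform -> load pipeline stages.
# Names and record fields are assumptions for this sketch, not
# actual Kafka or Kinesis API calls.
import json

def gather(raw_lines):
    """Parse raw JSON log lines arriving from a source system."""
    for line in raw_lines:
        yield json.loads(line)

def transform(records):
    """Normalize records into the shape the destination expects."""
    for rec in records:
        yield {
            "service": rec.get("svc", "unknown"),
            "level": rec.get("level", "info").lower(),
            "message": rec.get("msg", ""),
        }

def load(records, sink):
    """Append transformed records to the destination store."""
    for rec in records:
        sink.append(rec)
    return sink

raw = ['{"svc": "checkout", "level": "ERROR", "msg": "timeout"}']
store = load(transform(gather(raw)), [])
```

Chaining generators this way mirrors how streaming frameworks compose stages, with each stage processing records as they flow rather than in batches.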

Some early adopter companies, such as Ticketmaster, have used Kafka to feed observability systems for several years. But the practice is now going mainstream as more enterprises create microservices applications and work with distributed cloud-native systems such as Kubernetes, according to IT experts.

"When you get into hybrid environments, multiple cloud regions and complex suites of applications, [it's important to] properly manage what is often very important business data now," said Gregg Siegfried, an analyst at Gartner. "It's not just, 'Is my stuff up or down?' but using telemetry to understand how well your business is performing."

Vendors go all-in with observability pipelines

Data pipelines offer a systematic approach to collecting data from multiple clouds, regions and sources about the entire IT infrastructure, including networking and storage as well as applications. Some data pipeline tools for observability also look to create cost savings on back-end data storage systems by removing unnecessary data before it's ingested.
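That pre-ingestion pruning typically combines outright drops with sampling. The sketch below shows one plausible filter policy; the field names, levels and sample rate are assumptions for illustration, not any vendor's defaults.

```python
# Illustrative pre-ingestion filter: drop noisy debug events and
# sample routine ones so only useful telemetry reaches back-end
# storage. Thresholds and field names are assumptions.
import random

def should_ingest(event, sample_rate=0.1):
    level = event.get("level", "info")
    if level == "debug":
        return False          # drop debug noise outright
    if level in ("error", "warn"):
        return True           # always keep actionable events
    return random.random() < sample_rate  # sample the rest

events = [
    {"level": "debug", "msg": "cache hit"},
    {"level": "error", "msg": "db timeout"},
]
kept = [e for e in events if should_ingest(e)]
```

Even a simple policy like this can cut ingestion volume substantially when debug-level events dominate the stream.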

This market segment, which Siegfried has dubbed telemetry pipelines, has grown especially rapidly over the last 18 months, he said.

These newer vendors include Edge Delta, Calyptia and Mezmo. APM vendor Datadog also launched its own observability pipelines in June 2022. Cribl, founded in 2017, is considered the earliest mover in this field, Siegfried said.

Chart: AIOps is among the most significant areas of investment in data center modernization for enterprises in 2023, according to Enterprise Strategy Group's spending intentions survey.

Mezmo, which began as log analytics vendor LogDNA, has realigned its business around that trend. To differentiate from general-purpose data pipelines such as Kafka and Kinesis, as well as from commercial telemetry pipeline competitors, Mezmo is developing a set of open source-based tools specifically geared toward observability for cloud automation, according to Tucker Callaway, the company's CEO.

"We have an interesting opportunity to correlate data across streams while it's in motion to drive telemetry-related workflows," Callaway said. "We'll provide a set of correlation [features] out of the box for observability and security events. But store data in open formats so that customers can still own it."

Mezmo's new product, which is still a work in progress, is slated to offer Kubernetes log data pipelines this quarter and is expected to add support for metrics and trace data and non-Kubernetes environments in subsequent releases. Ultimately, Callaway envisions linking Mezmo's data pipeline with AI and machine learning frameworks, such as Apache Flink, for streaming data analysis.
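The kind of in-motion correlation Callaway describes can be sketched as a windowed join across two telemetry streams. The example below matches observability and security events that share a request ID within a short time window; the field names, window size and data are illustrative assumptions, not Mezmo's implementation.

```python
# Hedged sketch of correlating data across streams while in motion:
# pair observability and security events that share a request ID and
# occur within a short time window. All names are illustrative.
def correlate(obs_events, sec_events, window_s=5.0):
    by_id = {e["request_id"]: e for e in sec_events}
    matches = []
    for obs in obs_events:
        sec = by_id.get(obs["request_id"])
        if sec and abs(obs["ts"] - sec["ts"]) <= window_s:
            matches.append((obs, sec))
    return matches

obs = [{"request_id": "r1", "ts": 10.0, "latency_ms": 900}]
sec = [{"request_id": "r1", "ts": 11.2, "alert": "auth failure"}]
pairs = correlate(obs, sec)
```

A production pipeline would do this incrementally over unbounded streams, which is where frameworks such as Apache Flink come in.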

Despite DataOps advances, jury's still out on AIOps

If IT automation based on data analytics sounds familiar, that's because it's also been the value proposition for AIOps platforms for the last five years. But it's also taken at least that long for these tools to refine their algorithms enough to effectively handle low-level IT infrastructure automation, such as system restarts, in early adopter data centers.

Still, the aspiration toward further AI-based automation remains alive, according to Enterprise Strategy Group, a division of TechTarget. AIOps was named among the top priorities for data center modernization by 29% of the 742 respondents to Enterprise Strategy Group's 2023 Technology Spending Intentions Survey. At the same time, however, AIOps did not rank among the overall top drivers of IT spending for 2023 among those respondents.


For now, enterprise ops teams are taking incremental steps toward more advanced telemetry-driven automation with a focus on better understanding complex infrastructure to quickly assess the root cause of issues. The full AIOps vision of self-healing systems remains far off in the future -- if it ever comes to fruition, according to some enterprise IT pros.

"I haven't been too much onto the AIOps bandwagon yet," said Andy Domeier, senior director of technology for SPS Commerce, a Minneapolis-based communications network for supply chain and logistics businesses. "We need to make sense of our inter-service and third-party dependencies. Doing that accurately, quickly and automatically at scale requires us to have pristine telemetry and [system] health data, and we're still working on that."

Elsewhere, another IT pro building advanced automation with Datadog Workflows, also rolled out by Datadog last year, said that tool could be used to build toward AIOps-style auto-remediation. But he isn't entirely sold on the AIOps concept overall.

"AIOps is a heavy term, because at the most basic level, it is using machine learning to figure out when metrics are out of the norm," said Jeremy Stinson, chief architect of SaaS at Precisely, a data integrity software vendor based in Burlington, Mass. "But when you are collecting about a million individual metrics per minute that it isn't correlating or giving a suggestion for getting fixed, that limits the value. I see the industry evolving towards being able to offer more value. But it is really hard as infrastructures are constantly evolving."

Meanwhile, it's debatable whether the observability tools that claim the AIOps label are in the same realm as platforms such as BigPanda, Moogsoft and OpsRamp, which meet Gartner's strictest definition of an AIOps platform, Siegfried said.

For example, AIOps platforms considered representative of the category in a 2022 Gartner market guide had to offer machine learning analytics at both the point of ingestion and on stored historical data. Many observability tools are more focused on one or the other to date, according to Siegfried.

That said, the overlap between these two categories is growing with the adoption of advanced data management techniques, including data pipelines, he said.

"We are seeing more AIOps platforms ingesting raw telemetry as a way to provide additional context about events," Siegfried said. "Our ability as an industry to better manage data, better understand data and more quickly process data will also allow monitoring tools to do things like identify anomalies much more easily."

Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.

Next Steps

Data pipelines deliver the fuel for data science, analytics

Enterprises rework log analytics to cut observability costs
