
Monte Carlo set to boost data observability with $135M raise

Monte Carlo's CTO provides insight into the state of the data market as the growing volume of data sources and pipelines remains a challenge for organizations to manage.

Data observability vendor Monte Carlo on Tuesday said it raised $135 million in a series D round of funding.

In 2021, the San Francisco-based vendor raised a $25 million series B round in February and a $60 million series C in August. Monte Carlo's technology gives organizations better visibility into data pipelines.

The vendor also has been busy with product R&D over the past year, introducing a series of updates and features for its data observability platform.

In July 2021, Monte Carlo released its Incident IQ capability that provides root cause analysis for data pipeline failures. In November 2021, Monte Carlo Insights debuted, providing organizations with trend visibility for data pipeline usage. More recently, on April 7, 2022, the vendor introduced circuit breaker functionality to enable organizations to maintain the reliability of data-driven operations.
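
To give a rough sense of the circuit breaker idea mentioned above, the sketch below shows the general pattern of halting a pipeline load when a quality check fails, so bad data never reaches downstream consumers. The check names and load function are hypothetical placeholders, not Monte Carlo's actual API.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline")


def row_count_ok(rows, minimum=1):
    """Hypothetical check: fail if the batch is unexpectedly empty."""
    return len(rows) >= minimum


def no_null_ids(rows):
    """Hypothetical check: fail if the required 'id' column contains nulls."""
    return all(row.get("id") is not None for row in rows)


def load_to_warehouse(rows):
    """Stand-in for the downstream load step."""
    logger.info("Loaded %d rows", len(rows))


def run_with_circuit_breaker(rows):
    """Open the circuit (skip the load) when any quality check fails."""
    for check in (row_count_ok, no_null_ids):
        if not check(rows):
            logger.error("Circuit open: %s failed; halting load", check.__name__)
            return
    load_to_warehouse(rows)  # circuit closed: data is considered reliable


if __name__ == "__main__":
    run_with_circuit_breaker([{"id": 1}, {"id": 2}])     # loads normally
    run_with_circuit_breaker([{"id": 1}, {"id": None}])  # halts on the null check
```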

Monte Carlo faces an increasingly competitive landscape with multiple vendors, including startup Bigeye; Acceldata, which has raised $45.6 million in venture funding; and data reliability vendor Datafold, which also provides data observability capabilities. A key challenge that Monte Carlo and its competitors face is the need to provide more automation to enable organizations to more quickly fix data pipeline problems.

In this interview, Lior Gavish, co-founder and CTO of Monte Carlo, provides insight into the challenges the vendor faces and what's next.

Why raise more money now for data observability, especially in this volatile market climate?

Lior Gavish: What we're seeing is just incredible growth in the data space right now, and we've seen the emergence of data observability as a must-have part of the modern data stack.

You read all the headlines about inflation and increasing costs, and you have to wonder, does that mean that companies are going to freeze their investments in data? But we've actually seen the contrary -- despite all the market turmoil, the investment in data infrastructure and data teams continues.

Over the last year, we have learned more about what we need to do. And we're definitely seeing a lot of new product requirements, mostly around growing the data observability stack and the need to help larger teams with data reliability.

We raise money based on the needs of the business.

What gaps in data observability is Monte Carlo still looking to fill?

Gavish: We can't help our customers if we don't support everything that they're using. Companies use a lot of different tools. So we're both extending our coverage for existing categories of tools that we've been supporting, including data warehouse and BI tools, and also building out robust support for data lakes.

We will also likely work on event streaming technologies like Apache Kafka and AWS Kinesis. We're focused on helping to better serve our existing customers and also addressing the pain points of customers we don't support just yet.

What do you see as the primary challenges of data observability?

Gavish: I think that the core problem is always just the complexity of data systems.

Data moves around through a lot of different pieces of the stack, through a lot of processing stages. Sometimes data can pass through dozens of processing stages before it gets consumed as an end product, and every single step of the way could result in broken data.
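
To make that concrete, here is a toy sketch, not Monte Carlo's product, of a multi-stage pipeline with a simple volume check after each hop; a silent bug in any stage, such as a filter that drops most rows, is the kind of breakage observability tooling is meant to surface. The stage functions and thresholds are hypothetical.

```python
def extract():
    """Hypothetical source: 100 order records."""
    return [{"order_id": i, "amount": 10.0 * i} for i in range(1, 101)]


def clean(rows):
    """A filtering stage; a buggy predicate here could silently drop data."""
    return [r for r in rows if r["amount"] > 0]


def aggregate(rows):
    """An aggregation stage; legitimately collapses many rows into one."""
    return [{"metric": "total_revenue", "value": sum(r["amount"] for r in rows)}]


def monitor(stage, before, after, max_drop):
    """Flag a stage whose output shrinks more than expected -- one of the
    signals a data observability tool watches between pipeline hops."""
    if before and after < before * (1 - max_drop):
        raise RuntimeError(f"{stage}: row count fell from {before} to {after}")


def run_pipeline(stages, rows):
    # Each stage declares how much shrinkage is acceptable, since an
    # aggregation reduces row counts on purpose while a filter should not.
    for func, max_drop in stages:
        before = len(rows)
        rows = func(rows)
        monitor(func.__name__, before, len(rows), max_drop)
    return rows


if __name__ == "__main__":
    print(run_pipeline([(clean, 0.5), (aggregate, 1.0)], extract()))
```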

Complexity is the fundamental problem that we started to see two years ago, and it's only getting worse with time because organizations are building pipelines with more data sources.

Editor's note: This interview has been edited for clarity and brevity.
