Arctic Wolf CPO: Most AI deployment is generic, 'pretty weak'
Dan Schiappa, chief product officer at Arctic Wolf, said that while generative AI technology has enormous potential, many companies are deploying it for the wrong reasons.
SAN FRANCISCO -- Given the massive number of companies pitching "AI" in 2024, it comes as no surprise that the term itself has become something of a buzzword.
Dan Schiappa, chief product and services officer at managed detection and response (MDR) vendor Arctic Wolf, is one of thousands of people in security tasked with cutting through the noise. At RSA Conference 2024 this month, a significant portion of vendor booths on the show floor touted the power of "AI" in their companies' offerings. Not all of these vendors are necessarily using AI, and what "AI" even means varies dramatically from vendor to vendor.
Schiappa, who joined Arctic Wolf in 2022, sat down with TechTarget at the conference to discuss this challenge, as well as how Arctic Wolf utilizes AI. He also touched on the company's new risk assessment tool, the Arctic Wolf Cyber Resilience Assessment, announced as part of RSAC.
What necessitated your new risk assessment tool?
Dan Schiappa: One of the key things we're seeing is that transferring risk is a key element of getting [cyber] insurance. One of the things we learned was that the whole cyber insurance market is pretty nascent. It's the equivalent of getting life insurance and the [insurer asking], 'Do you smoke? And do you cage dive with sharks or no? OK, well, here's your policy.' And so, we're using this tool to help bridge the gap between you understanding your actual cyber risk environment and translating that information into becoming more insurable.
One of the things we talked about with our Concierge Delivery Model is this concept of a security journey. Our concierge team is an interface to our customers; they do what we call security posture in-depth reviews, or SPIDRs, and they help customers along this journey to get good security hygiene and to make sure they're doing all the right things. That feeds into this tool as well. We look at the third-party tooling and make sure you're using the right configurations and aren't going against the NIST framework. We understand your overall risk posture, and we have interfaces now to the insurance carriers so they can provide that information and get discounts on their insurance policies.
As someone who's involved in the management process [being in the MDR space] -- even if you're not doing the managing yourself -- is your job less about managing a customer's security operation entirely and more about taking them on a journey to control their own security posture?
Schiappa: It's a mutual responsibility model. Absolutely, yes. Even customers who have no security expertise whatsoever, we still have to help them set up their ecosystem appropriately. We'll walk them through that. Some of our customers are super sophisticated and we're doing way more mutual management. And then there are others who say, 'I don't know what I'm doing. I'm a plumbing supply company. Just do it for me.' But we still have that mutual responsibility model of helping them set up and manage their security journey.
What else is top of mind for you right now, security-wise?
Schiappa: I think the AI aspect is a big, important thing for us. Our industry has been an early adopter of AI for a long time, and we've used it for many, many years to do anomaly detection, malware detection and all sorts of stuff like that. We're seeing now a huge improvement in the ability to deliver AI, but I think the thing that worries me the most, just in this industry, is that I always joke about how buying the .ai domain name does not make you an AI company.
People are saying, 'Hey, if I use AI, my multiple goes up.' And they're trying to take an AI technology and find a reason to use it instead of finding a problem and understanding which type of AI is going to help solve the problem. Generative AI is a cool thing. That's good for some things; it's not good for all things. There are different models -- deep learning, neural networks, causal ML, Bayesian models -- and there are also different types of models you can create to solve different problems. I just see this kind of genericism of AI around GenAI and LLMs, and it just shows to me that the level of sophistication of how people are deploying AI is pretty weak. And it's more like tabloid headline-type AI adoption, as opposed to building it to solve the right problems.
I imagine that must be challenging for you as a CPO, someone who's involved in the messaging process, because the definition of AI has become meaningless to some extent. As someone who has a role in deciding how your product is positioned and as someone who works with AI, how do you think about that?
Schiappa: Yeah, and I do worry about that. Because in the truest definition of AI, it means sentience. And we're not there. I think AI has become an umbrella term for anything that uses mathematical analytics to solve a problem. And I think there are elements of machine learning that aren't AI but are very powerful and very capable. And that gets lumped into an AI brand. There are some simple security analytics that you're doing, some real basic stuff that has a huge impact in business and that gets lumped into AI. It's a very loosely applied term.
And I believe that's where people get away with claiming AI capabilities without really having AI capabilities. The area that has really brought AI to the forefront is GenAI. It is the ChatGPT stuff where people can actually touch it, feel it and interface with it. The vast majority of AI in cyber in the past always happened behind the scenes. You didn't see it. You saw the results of it, but you couldn't actually interface with it. And so, I think people are so maniacally focused on that visible piece that they're missing some of the really cool innovations you can do that are still behind the scenes.
How have you, as a product person, been trying to cut through that noise?
Schiappa: We talk a lot about what we're doing with AI and what problems we're solving. I think that's the key: what problem you're solving. We look at it in a few categories. One is reducing the volume of data. We do about five and a half trillion security observations a week. You cannot possibly process that with humans. How do we take that and make the workload manageable? A typical customer of ours gets about one alert a day. With a SIEM, for comparison, you can assume customers are probably getting hundreds to thousands of alerts a day. Filtering all that data down to what matters and being able to action it is driven by AI.
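The funnel Schiappa describes, dropping low-value noise and then collapsing duplicate signals into a single alert, could be sketched in miniature as follows. The event schema, severity threshold, and `triage` helper are illustrative assumptions for this sketch, not Arctic Wolf's actual pipeline:

```python
from collections import Counter

# Hypothetical raw observations; a real pipeline ingests these at trillion scale.
observations = [
    {"source": "endpoint", "type": "process_start", "severity": 1},
    {"source": "endpoint", "type": "process_start", "severity": 1},
    {"source": "network", "type": "beaconing", "severity": 8},
    {"source": "cloud", "type": "login_failure", "severity": 3},
    {"source": "network", "type": "beaconing", "severity": 8},
]

def triage(events, threshold=5):
    """Drop low-severity noise, then deduplicate repeated signals."""
    significant = [e for e in events if e["severity"] >= threshold]
    # Collapse duplicates so one noisy detector yields one alert, not many.
    counts = Counter((e["source"], e["type"]) for e in significant)
    return [
        {"source": src, "type": typ, "occurrences": n}
        for (src, typ), n in counts.items()
    ]

alerts = triage(observations)
# Five raw observations funnel down to one actionable alert.
```

The point of the sketch is the shape of the reduction, not the specific rules: in production the threshold step would be a learned model scoring each observation rather than a fixed severity cutoff.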
The second piece is how to correlate similar data together. How do we know that the signal from an endpoint is related to this signal on the network, which was related to the signal from the cloud? Putting those things together and providing that to a security analyst gives them context. As they're making security decisions, they have the context of the full view, or they're taking that correlated data and providing it to another AI model, which is doing detections and remediation across multiple attack surfaces. The last piece is in interfaces. About 40% of our tickets in our SOC are inbound customer questions. How can I create a chatbot that helps us filter through that and remove the toil from our security operators so they can go focus on the real deep security things?
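The cross-surface correlation Schiappa mentions can be illustrated with a toy example: signals from the endpoint, the network, and the cloud become one incident when they share an indicator. The field names, the choice of a destination IP as the join key, and the `correlate` helper are all assumptions made for this sketch:

```python
from collections import defaultdict

# Hypothetical signals from three attack surfaces, keyed by a shared indicator.
signals = [
    {"surface": "endpoint", "detail": "suspicious process", "ip": "203.0.113.7"},
    {"surface": "network", "detail": "outbound beacon", "ip": "203.0.113.7"},
    {"surface": "cloud", "detail": "API token misuse", "ip": "198.51.100.4"},
    {"surface": "cloud", "detail": "anomalous login", "ip": "203.0.113.7"},
]

def correlate(events):
    """Group signals that share an indicator (here, a destination IP)."""
    grouped = defaultdict(list)
    for e in events:
        grouped[e["ip"]].append(e)
    # Only clusters spanning more than one signal become analyst context.
    return {ip: evs for ip, evs in grouped.items() if len(evs) > 1}

incidents = correlate(signals)
# The three signals touching 203.0.113.7 collapse into one correlated incident.
```

A production system would correlate on many indicator types at once (hashes, users, hosts) and weight matches probabilistically, but the grouping idea is the same.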
Then how can I take things like natural language and create that as a query interface to our security operators, and to our customers, so they don't have to know SQL or what our schema is? They can just say, 'Show me all the servers that talked to this IP address in the last three minutes.' And they get their answer.
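The natural-language interface Schiappa describes ultimately has to emit a structured query over the event store. A minimal sketch of the kind of query his example sentence would translate into, using a hypothetical event schema and a `servers_talking_to` helper invented for illustration, might look like this:

```python
from datetime import datetime, timedelta

# Hypothetical event log; a real deployment would query a SIEM-scale store.
events = [
    {"host": "web-01", "dest_ip": "203.0.113.7", "time": datetime(2024, 5, 6, 12, 0)},
    {"host": "db-02", "dest_ip": "203.0.113.7", "time": datetime(2024, 5, 6, 11, 0)},
    {"host": "web-03", "dest_ip": "192.0.2.10", "time": datetime(2024, 5, 6, 12, 1)},
]

def servers_talking_to(ip, window, now):
    """Structured query an NL front end might emit for:
    'Show me all the servers that talked to this IP in the last three minutes.'"""
    cutoff = now - window
    return sorted({e["host"] for e in events
                   if e["dest_ip"] == ip and e["time"] >= cutoff})

hosts = servers_talking_to("203.0.113.7", timedelta(minutes=3),
                           now=datetime(2024, 5, 6, 12, 2))
# Only web-01 falls inside the three-minute window.
```

The value of the NL layer is exactly what Schiappa says: the operator never has to know this schema or write the filter themselves; the model maps the sentence to the parameters.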
Editor's note: This interview was edited for clarity and length.
Alexander Culafi is a senior information security news writer and podcast host for TechTarget Editorial.