AI Security Alliance urges clarity for buying AI security tools

Vendors and customers must be aware of potential gaps between expectations and reality when selling or buying AI cybersecurity products, an AI security expert advises.

The explosion of AI security tools and the ensuing hype have led to confusion about what enterprises can realistically expect when deploying and using these products. Part of the problem may be the lack of a standard set of expectations for what buyers need to bring and what sellers can actually deliver.

Kapil Raina, founder and chair of the AI Security Alliance -- which formally launched in August 2019 -- and vice president of marketing at Preempt Security, an authentication security firm based in San Francisco, told SearchSecurity that both vendors and customers need a common understanding and shared guidance when dealing with AI security tools in order to avoid unreasonable expectations or dissatisfaction with the products.

Vendors need to explain how their AI models make decisions, how those models learn to perform their specific tasks better over time and how they can be fine-tuned. Enterprises considering an AI-based product should first understand the data they will feed into those models.
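As a rough illustration of what "explaining how a model makes decisions" can look like in practice, the sketch below uses permutation feature importance from scikit-learn to show which signals drive a classifier's output. The model, feature names and data are hypothetical assumptions for illustration only, not any vendor's product.

```python
# A minimal, hypothetical sketch of model explainability for an AI security tool.
# The feature names and data below are illustrative assumptions, not a real product.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical signals a security model might weigh when scoring activity.
feature_names = ["login_freq", "failed_auths", "bytes_out", "off_hours"]
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance gives a buyer-auditable answer to "which signals
# drove the model's decisions?" instead of leaving the model a black box.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```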

Editor's note: This interview has been edited for length and clarity.

What was the intent behind starting the AI Security Alliance?

Kapil Raina

Kapil Raina: The main intent was to bring [together] organizations that might purchase products with AI in them or vendors themselves. The idea was to create practical guidance [for common vendors' questions, like]: If I have to sell AI to customers as a vendor, what are the things they're going to be expecting from me? What are the things they should ask me? Because the problem was: You don't have enough data scientists; you don't have enough data to test things right. So, it becomes very problematic.

From a legal point of view, how do you explain what the AI did and all the factors that went into it? Most vendors will make a black box because it's very difficult for the vendor to make it tunable, and it's a little bit risky from an [intellectual property] point of view. If I can see the algorithms and all the assumptions and parameters, it's a lot easier for me to explain.

What's really interesting is almost every major country in the world now has a set of guidelines around AI usage for national defense. Just like in security, the technology is one part; legal is the other. But, in terms of the sheer human capital and knowledge, China is throwing a lot of money at that. That's what we're seeing in Asia, and a lot of the universities are developing AI centers and expertise much faster than we are seeing in North America.

How difficult is it to get vendors on board with a standard definition of AI or to agree about when a product can be marketed as AI?

Raina: The problem is that AI is based on learning, tunability and the specific environment. Let's say one customer would have network traffic logs; another customer may have authentication logs; the third customer has both. The system would have to respond based on what was available, and it would almost never conclude what their human analysts conclude. So, from a vendor's point of view, they are almost never successful.

When we started the Alliance, all sorts of vendors were like, 'Sign us up,' because the vendors were struggling just as much in the actual sales process as the customers.

Folks were struggling to do the evaluation of the products, number one. And, number two, there were legal liabilities because they couldn't understand the technology well enough. They were taking on an inherent transfer of risk from the vendor they were using.

If you ask CISOs to look at the NIST [Cybersecurity Framework] as a general thing -- it's a framework that also becomes a legal mechanism because I can say: 'This is the general standard; I met it, and anything beyond that is unreasonable.' That type of general standard doesn't exist in AI.

What's the plan for the Alliance over the next year?

Raina: One question is: Should we open up to working groups? We have the VP of security at InterContinental Hotels Group. He's got one working group focused on what practical guidelines are -- for the security operations person or CISO -- and whether they're going to use AI in their threat operations.

Another working group is run by the senior engineering manager at Symantec. He's working on edge or IoT devices and looking at what the AI security implications are there. AI needs a lot of data to be effective, but when you have edge devices, that's problematic, especially if they're industrial systems. You're not getting the data back all the time, but the device has to start making decisions.

The second big plan is with ISACA. For 2020, one of its top priorities is AI expertise for its members. They want to work with us so we can give them a set of guidelines they can use for certification programs for their members. For them, it's a monetary value. But, more importantly, it gets the AI skills up to what the industry needs. The idea is to provide guidelines for groups like ISACA that they can then use to develop curriculums and certifications for actual individuals.

The Cloud Security Alliance has an AI working group. I talked to one of the board members at Black Hat who said the group has struggled to get that going. Part of it is branding. The person was talking about different ways we could use the momentum that we have to help them. Obviously, none of this is formalized yet, but one of the goals is to formalize these relationships.

Right now, the Alliance is an all-volunteer organization, so the third goal is to formalize that so we can accelerate into other regions, primarily focused in North America and parts of Western Europe, but make the Alliance really global to include different perspectives on the whole thing.

One of the issues you mentioned earlier is demonstrating AI security tools to potential customers. How do vendors deal with the time it takes a model to learn before it can work properly?

Raina: The challenge is that you have pretrained models, but that's based on diversity of data. One of the challenges facing the security industry is that your customers are not going to share data with each other -- let alone with you.

Part of it is you bring data to train models, but then you have to have enough models -- not just anomaly-based ones -- that can tune and learn. In general, that's a problem with AI products because you have to balance that out. You don't want to wait six months to see this thing work and then make a decision, because by then the vendor will be going out of business.

You have to eventually focus on a set of data, like Preempt focuses primarily on authentication data. It's possible to do that. It's not possible to do authentication, network and endpoint. It's just not.

If I need that level of data -- meaning the whole stack from application to user -- and I also want it over long periods of time because your behavior during Christmas will be different than it is in the summer, imagine the amount of data from all your systems that you have to eventually sort through.

One of the trends is that the larger cloud computing platforms are much more ready to take advantage of big data, big computing and the necessary scalability. You see a lot of venture capitalists familiar with AI, and one of the first things they'll ask you is: 'Where is your data?' People can create an algorithm, but the data really makes it useful. So, AI models run on data, and in security getting that data is always a struggle. That's why you're seeing advancements in many areas of AI, but security is taking longer.
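To illustrate the narrowly scoped, anomaly-based approach Raina describes -- focusing on one data set such as authentication logs rather than the whole stack -- the sketch below scores synthetic per-user authentication features with an isolation forest. The feature set and data are hypothetical and do not represent Preempt's or any vendor's actual method.

```python
# A minimal, hypothetical sketch of anomaly scoring over authentication-style
# features; the features and data are illustrative, not any vendor's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-user features derived from authentication logs:
# logins per day, distinct hosts touched, failed-login ratio.
normal = rng.normal(loc=[10, 3, 0.05], scale=[2, 1, 0.02], size=(500, 3))
suspicious = rng.normal(loc=[40, 20, 0.4], scale=[5, 4, 0.1], size=(5, 3))
X = np.vstack([normal, suspicious])

# IsolationForest flags points that are easy to isolate, i.e. statistical outliers.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = detector.decision_function(X)   # lower = more anomalous
flags = detector.predict(X)              # -1 = anomaly, 1 = normal

print("flagged rows:", np.where(flags == -1)[0])
```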
