
Alex Stamos on how to break the cycle of security mistakes

In an interview, SentinelOne's Alex Stamos discussed the importance of security by design and why it needs to be applied to emerging technologies, including generative AI.

Alex Stamos wants security by design to become an industry standard because, as he put it, "we keep making the same mistakes over and over again."

Stamos, now chief trust officer at SentinelOne, has more than 20 years of history in the security industry. He previously served as CSO at Yahoo and at Facebook, and in 2021 he co-founded the Krebs Stamos Group security consultancy with former CISA Director Chris Krebs, whom then-President Donald Trump had fired for challenging Trump's unfounded claims of widespread voter fraud following the 2020 U.S. presidential election. SentinelOne acquired the Krebs Stamos Group last fall, hiring both founders in the process.

Beyond this, Stamos is one of the most prominent independent voices in the infosec space. He most recently made a big impression with a LinkedIn blog post published in January in which he discussed Microsoft's prioritization of security revenue following a breach the tech giant suffered at the hands of a Russian nation-state actor tracked as Midnight Blizzard.

Last month, Stamos spoke with TechTarget Editorial about security by design, the practice of prioritizing security in product development above all else. The concept is a regular point of discussion in security circles. In May, for example, CISA announced that 68 organizations, including SentinelOne, had signed the cyber agency's Secure by Design pledge, in which software publishers promised to make measurable progress toward applying secure by design principles in their organizations.

During the interview, Stamos also discussed Microsoft's recent security struggles and the risks of generative AI (GenAI).

Editor's note: This interview was edited for clarity and length.

Why is security by design important to you?


Alex Stamos: I've been doing this professionally for over 20 years, and we keep making the same mistakes over and over again. We have new technologies, new waves of products. But every single time there's a new wave of products, we end up going through the same cycle: getting super excited about a new platform or a new design paradigm, then realizing that security was not something we considered too deeply in the first place. Then there's a huge amount of research -- by independent researchers, by companies, by academics. And then come the rounds and rounds of people promising to do better next time.

From my perspective, it would be nice to start to codify what we've learned from each of these eras of doing security so that when we get to the next time around, even if the technological details are different, the fundamental concepts are the same.

The phrase "secure by design" came up a lot at RSA Conference 2024. Microsoft used the terminology with its Secure Future Initiative, it surfaced when companies discussed AI, and CISA has been focusing on it too. How do you explain this moment that security by design is having?

Stamos: From the Microsoft perspective, I see it more as marketing cover for the fact that they really just lost their way on security. They've made big executive-level decisions to prioritize revenue over shipping products that are safe from the beginning. For them, the biggest issue is their addiction to security revenue, which isn't captured in that model the way anybody else uses it. I set Microsoft aside from everybody else here because Microsoft has a real fundamental problem: They sell fundamentally insecure products and then charge you extra to make them safe. That's not security by design. That's greed getting the best of their product design decisions.

For security by design overall, one of the reasons you're seeing it is that CISA has been effective in using it as the overarching terminology for trying to get companies -- especially companies that sell enterprise products -- to wipe out entire fundamental classes of vulnerabilities. CISA has been effective in getting it into White House documents and into pledges, and it has been using security by design as an overarching idea that includes what we used to call the secure development lifecycle, or SDL.

How is SentinelOne engaging with secure by design principles?

Stamos: One thing that we all learned from SolarWinds is that adversaries, from the very high end all the way down to your standard ransomware actors, have figured out that going after the supply chain -- either the code supply chain or the cloud supply chain -- is a great way of getting a lot of benefit from a relatively small amount of effort. We have tens of millions of machines that are running our agents. We have tens of thousands of critical customers that trust us by installing our software and using us as their security product.

We know that we are a target of the highest [order] for adversaries, so it's important for us to demonstrate publicly that we are doing everything we can first to prevent those attacks and wipe out entire classes of flaws in our products, and then to promise that if something bad happens, we're going to be honest and open about it. Those are all big parts of CISA's secure by design pledge. None of those things were completely new to us, but I liked how CISA put it all together into a cohesive whole. As a security company, we have an obligation to help support CISA's efforts to standardize security by design. That's why we're participating.

Are you optimistic about companies implementing security by design when it comes to generative AI?

Stamos: Generative AI, and AI in general, is a huge benefit for defenders. There is currently a significant defender's advantage in using AI, and a lot of that will be durable, although the bad guys will catch up. One of the reasons we're way ahead is that companies have collectively spent billions of dollars over years and years of doing this research. SentinelOne has used AI since the beginning -- not GenAI, mostly classifiers. The whole idea of SentinelOne was, instead of having signature-based detection, you train AI on behaviors, and you look for those behaviors both on the endpoint and in the cloud.
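To make that contrast concrete, here is a minimal sketch, in Python, of signature matching versus behavioral scoring. Every hash, event name, weight and threshold below is invented for illustration; a real behavioral engine learns its model from labeled endpoint telemetry rather than hard-coding weights.

```python
# Toy contrast between signature-based and behavior-based detection.
# All values are hypothetical -- a sketch of the concept, not any
# vendor's actual model.
import hashlib

KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # toy signature DB

def signature_detect(file_bytes: bytes) -> bool:
    """Signature matching: only catches binaries seen before."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# Behavioral approach: score what a process does, not what it is.
BEHAVIOR_WEIGHTS = {
    "spawned_shell_from_office_app": 0.5,
    "encrypted_many_files_quickly": 0.7,
    "deleted_volume_shadow_copies": 0.8,
    "beaconed_to_new_domain": 0.4,
}

def behavior_score(observed_events: set[str]) -> float:
    """Sum the weights of observed behaviors. A trained classifier
    would learn these weights from telemetry instead."""
    return sum(BEHAVIOR_WEIGHTS.get(e, 0.0) for e in observed_events)

events = {"encrypted_many_files_quickly", "deleted_volume_shadow_copies"}
if behavior_score(events) >= 1.0:  # hypothetical alert threshold
    print("ALERT: ransomware-like behavior detected")
```

The structural difference is the point: the signature check can only recognize artifacts it has seen before, while the behavioral score can flag a never-before-seen binary based on what it does at runtime.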


It took years and years to do. As a result, defenders have a head start, and that's good. A lot of the uses of generative AI right now are around workforce enhancement -- making individuals much more powerful and much more productive. That is something we absolutely need in security. The number of companies that have a fully staffed, multi-tier SOC [security operations center] that can handle all their alerts and investigations in house, and do so at the level necessary, is incredibly tiny. Allowing a smaller number of people to use AI to become much more efficient in their work -- and then eventually allowing AI to make defensive decisions for you, under the supervision of humans -- is the way forward for us to make things better.

As far as the security issues caused by AI, this is an interesting challenge because we're super early in understanding the adversarial mechanisms you can use to manipulate generative AI systems. Anybody who tells you that they can completely secure your AI install is lying to you. From my perspective -- again, being the old guy -- this is like the late '90s and early 2000s: If somebody in 1999 said they could build a perfectly secure web application, they might not have known it, but they were completely lying, because three-quarters of the interesting flaws in web apps had not been invented yet. That's where we are with GenAI. Until we've got a couple of decades of research and vulnerabilities, you can't have a lot of confidence.

If you're putting GenAI in place, the first critical step is doing so where it can't be manipulated by adversaries. A lot of enterprises are thinking about AI as an internal workforce tool. That's great, because exposing it only to insiders you already semi-trust reduces your risk. If you put an AI in a place where bad guys can talk to it, that is risky right now.

Second, you have to build a risk management framework that's humble about the fact that we don't understand how these systems work right now. We don't understand how they can be manipulated. You have to watch everything that's going on and move quickly if you notice some kind of new manipulation.
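As a concrete, loudly hypothetical illustration of that "watch everything" posture, the sketch below wraps a model call with audit logging and a crude screen for known manipulation phrases. The function names, regex patterns and logging scheme are all assumptions made for this example, not any vendor's API; a static pattern list will always lag real attacks, which is exactly why keeping the full log matters.

```python
# Hypothetical sketch of the "watch everything" posture: log every
# prompt/response pair and flag inputs resembling known manipulation
# attempts so humans can review them and adapt quickly.
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-audit")

# Illustrative patterns only; real injection techniques evolve far
# faster than any static list.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
]

def audited_completion(prompt: str, model_call) -> str:
    """Wrap any model call with audit logging and a crude input screen."""
    flags = [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(prompt)]
    response = model_call(prompt)
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "flags": flags,  # flagged entries get fast human review
    }))
    return response

# Usage with a stand-in model function:
print(audited_completion("Summarize this report.", lambda p: "stub answer"))
```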

Do you feel like we're making progress toward high-level executives -- like those without security backgrounds -- prioritizing secure software development over short-term shareholder gains?

Stamos: Between SEC rules, lawsuits and big attacks, boards have understood that they have a direct security responsibility. The problem is they don't know how to manage it yet. They're slowly moving into a model where they have the structures and people necessary to actually understand and manage their security teams. For boards, the critical things are, one, you should have a technical risk committee that is separate from the audit committee. Auditing -- people looking to see whether money has been stolen and whether the accounting makes sense -- is totally different from helping manage a team that understands all the adversarial risk issues and deals with them.

And two, have a technologist on your board. A couple of boards have started this, but they're moving slowly. Boards always want CEOs, chief marketing officers and executives who bring value on the flashy moneymaking side. But having at least one person who can sit there and watch the CISO give their 80-slide presentation and know whether they're being [misled] or not is critical.

It is extremely rare that I run into a board where I feel like they are able to effectively manage the CISO. And that is something boards have to really buckle down on -- having the right technical skills on the board to absorb and manage and give useful feedback to a security team.

Alexander Culafi is a senior information security news writer and podcast host for TechTarget Editorial.
