
Amazon CISO discusses the company's cautious approach to AI

At the recent AWS re:Inforce 2024 conference, Amazon CISO CJ Moses spoke about the risks and threats associated with new AI technology and how the cloud giant addresses them.

While Amazon is one of several companies driving the current AI revolution, security leaders at the company are using extreme caution when it comes to deploying third-party technology.

At the re:Inforce 2024 conference earlier this month, AWS executives highlighted the cloud provider's longstanding security culture as a major differentiator and detailed many behind-the-scenes practices and offerings that benefit customers. For example, AWS publicly unveiled Sonaris for the first time: an internal tool that analyzes vast amounts of network traffic, including scanning attempts from threat actors.

AWS executives also discussed how they prioritized security throughout the company in an effort to stay ahead of emerging threats -- even if it means migrating from industry-standard platforms. That philosophy extends to AI deployment, too, according to Amazon CISO CJ Moses. Despite the enormous potential and omnipresent buzz around new AI technology -- particularly generative AI offerings -- he said organizations should be aware of the significant risks such products and services may carry.

In this interview, Moses spoke about how Amazon carefully vets new AI technology. He also discussed tools like Sonaris, as well as AWS' push to require MFA for customer accounts.

Editor's note: This interview was edited for clarity and length.

You talked about Sonaris publicly for the first time at re:Inforce. Are you considering turning it into a more customer-facing offering?

CJ Moses: Yes and no. I would say yes, we already have -- in the sense that every bit of threat intel we get is pushed out to defense. An example is S3. Before we did this, anything stored in S3 would get enumerated, and those enumerations meant that threat actors, through the DNS tables, could tell what you named your buckets and, if the buckets were public, potentially access them as well. We had problems with that. In the last 12 months or so, we've blocked around 24 billion enumeration attempts just on S3. If you take that 24 billion and look at it from an EC2 perspective, the number is 2.6 trillion. We did this and, again, we haven't talked about it until now. We just created the capability, and customers are getting all the benefit from it.
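For readers who want to check the customer-facing side of that exposure, here is a minimal sketch, assuming boto3 and locally configured AWS credentials, that flags buckets without a full public access block. It illustrates the kind of defensive check Moses describes, not Amazon's internal Sonaris tooling.

```python
# Minimal sketch: flag S3 buckets whose public access is not fully
# blocked. Assumes boto3 and locally configured AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        exposed = not all(config.values())  # any of the four flags disabled
    except ClientError as err:
        # No configuration at all means nothing is blocked at the bucket level.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            exposed = True
        else:
            raise
    if exposed:
        print(f"Review bucket: {name} (public access not fully blocked)")
```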


The best security our customers can have is the one that they don't know about and don't have to pay for. I'm sure the profiteer people and the people that have to sell stuff like that don't like to hear that. But the reality is that the platform and security capabilities we provide need to be secure for them to be of value. Therefore, giving customers as much of that as we can give without creating fear, uncertainty and doubt in the cloud -- and especially in our cloud -- is exactly what we want to do. If customers do want additional capabilities, they can use [Amazon] GuardDuty and they can use Inspector, because that threat intelligence that we get from Sonaris and from MadPot is pushed into those areas.
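For customers who do opt in to that paid layer, the Sonaris and MadPot intelligence Moses mentions surfaces as GuardDuty findings, which can be queried directly. A minimal sketch, assuming boto3, an existing GuardDuty detector and the us-east-1 region; the severity threshold is an arbitrary choice for illustration.

```python
# Minimal sketch: list high-severity GuardDuty findings -- the surface
# where AWS-side threat intelligence becomes visible to a customer.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )["FindingIds"]
    if finding_ids:
        findings = guardduty.get_findings(
            DetectorId=detector_id, FindingIds=finding_ids
        )["Findings"]
        for finding in findings:
            print(finding["Severity"], finding["Type"], finding["Title"])
```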

We get significant [threat intelligence] hits on entities that aren't our customers. We give them a phone call, we send emails and we reach out to our networks. We had a multinational fast food restaurant that wasn't a customer of ours, and we observed some very significant activity that they needed to know about -- APT [advanced persistent threat] activity. We notified their team, and their team said, 'Hey, we got it. We know about it. We're good.' Later on that day, we're still seeing significant and increasing amounts of activity, and the two teams talked again. They said, 'No, we're good. We got it. It's all gone.' No, it wasn't. And they wouldn't accept it, so we escalated it through me. I got a hold of their CISO at around 2 a.m. We had a phone call, and we talked. He said 'OK, but my team tells me we're good.' I sent him an email with fresh logs that I had just pulled. And subsequently, he pulled his IR [incident response] leader in, and I pulled my threat intel leader in, and we had a conversation. And our teams worked together through the night and made sure that they were taken care of as best we could from afar. They weren't a customer before that. They are now.

It seems like the bigger cloud services have gotten, the more the shared responsibility model has shifted in a direction that puts a lot more on you, the cloud provider. Take the MFA requirements you've rolled out. You're essentially protecting the customers from themselves. I would argue it's not your job, even though it's good business for AWS.

Moses: It may not have been our job. And I love when the shared security model comes up in discussion, because I drafted the first version when I wrote the very first AWS security white paper. Adam Selipsky, the former AWS CEO who was in a different role at the time, and I went back and forth over this one paragraph. That paragraph had to exist because I was having conversations with our customers about these questions: What are the responsibilities of the customer, and what are ours? They needed to know. The basic paragraphs that became that model for AWS were based upon customers wanting to understand what their responsibilities were. And it made sense that we would need to create a matrix for each service to some extent. Back in those days, it was EC2 and S3 -- these are very much infrastructure services, so it's pretty easy. The further we move up the stack and do things that are beyond software as a service, the more that is in our space of responsibility. And the way I've always liked to explain it is this: As a customer, if you have access, then you have responsibility.

If you're an online banking customer, the bank, in providing that capability to you, is responsible for protecting your money from their data centers all the way through to your login and password. If you as the customer fail to protect your user ID, password and MFA -- preferably not SMS-based MFA, but I know a lot of banks still use it, and at least it's better than nothing -- all of your money can still be exfiltrated to someone else. There is a responsibility on the customer. But at the same time, that's why you see banks doing MFA of various flavors and capabilities, because they also realize that you as a customer are going to have a lot of pain. You're not going to be very happy with the bank, regardless of whether it was their fault or your fault. And therefore, as much as we can, we as a provider are going to protect customers from themselves, educate them and do everything that we can.
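Organizations that want to enforce this on their side of the shared responsibility model often use an IAM policy that denies nearly every action to sessions that did not authenticate with MFA. A minimal sketch, assuming boto3; the user name and policy name are placeholders, and the NotAction list follows the widely documented pattern that still lets a user enroll an MFA device.

```python
# Minimal sketch: deny all actions, except those needed to set up MFA,
# whenever a session was not authenticated with MFA. Names are placeholders.
import json

import boto3

DENY_WITHOUT_MFA = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllExceptMfaSetupWithoutMfa",
            "Effect": "Deny",
            "NotAction": [
                "iam:CreateVirtualMFADevice",
                "iam:EnableMFADevice",
                "iam:ListMFADevices",
                "iam:ListVirtualMFADevices",
                "iam:ResyncMFADevice",
                "sts:GetSessionToken",
            ],
            "Resource": "*",
            # BoolIfExists also catches requests where the MFA key is absent.
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="example-user",  # placeholder
    PolicyName="deny-without-mfa",  # placeholder
    PolicyDocument=json.dumps(DENY_WITHOUT_MFA),
)
```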

Regardless of whether it's our responsibility, I think we feel responsible to do all that we can. And quite honestly, it is good for business as well. From our perspective, it also goes back to a more important approach: If we can do it, then we should. And in many cases, we do. There are a lot of things that we're going to continue to do, and we're going to continue to add to that.

It seems like every company is adding AI to everything these days. In your position as a CISO, when you're engaging other companies and testing different third-party products and services, how do you cut through all the noise and assess the technology?

Moses: If we're looking at vendors that are using AI of any sort -- especially GenAI, given its pervasiveness now, where it's buzzword bingo -- then we have an entirely different group of questions and discussions going on. And a lot of those questions come down to, first, figuring out if they're actually using it. Everybody says, 'Oh, I'm using AI or GenAI,' or whatever the hype word of the day is. Then we start asking, 'Which models are you using? Which tables are you using, and how are you protecting the tables? What kind of guardrails do you have?' These are the questions you ask.

And normally, after the first level of questions, the companies say, 'Wait a second, let me go get the AI person.' Then you start to figure out whether they're doing true AI. And honestly, in many cases, the companies say, 'We've been dabbling, but we haven't really put it into the product that you're using yet.' And our response is, 'Good -- don't.' Right now, we don't believe that many vendors, customers or otherwise, are ready to do that. There's still a lot of work that needs to be done in that space. If it's a vendor that's doing AI on [Amazon] Bedrock or AWS, we know the questions to ask and can work with them to make sure it's done properly. And with the data that's being placed in there, when it's something they're using for us, we make sure that we still retain ownership of that data. If a vendor is using it on our behalf, we require in our contracts that we retain ownership -- not only of the data itself, but of the model they're training on our data.
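The data-ownership point is visible in how Bedrock is called: Requests go through the caller's own AWS account, and AWS documents that customer prompts are not used to train the underlying foundation models. A minimal sketch, assuming boto3, model access in us-east-1 and an example Claude model ID.

```python
# Minimal sketch: calling a foundation model through Amazon Bedrock's
# Converse API. The model ID and region are assumptions for illustration.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize our data-retention policy."}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```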


These are all things that you have to pay attention to when doing AI. And I think that's really the buzz right now. Look at some of the recent [AI] announcements, like the Apple [Intelligence] announcement: People are freaking out and asking what it all means in relation to those questions. You have to get the lawyers down into the data and figure out who actually owns the data you put into your phone.

Is it now going to be somebody else's data? Those things will be figured out in time. But from an Amazon perspective, we actually do the reviews, and we ask the questions. With some of our current vendors, we've seen some of their [AI] enhancements. And luckily, those enhancements are minimal and not deployed everywhere. With a lot of our deployments, we have contracts in place such that if vendors want to add such things, they have to add them in a fashion that lets us own the information and doesn't allow it to go any further. We've seen announcements of new AI additions to things we're using, and we automatically reach out through the account managers to find out whether it has been turned on in our environment -- and thankfully, to date, the answer has been 'No.' And second, we get lawyers together to make sure that stuff doesn't get turned on without us being able to review it.

You want to make sure that you're not putting proprietary information someplace where you're losing control of that proprietary information. That's why, from the AWS perspective, with Bedrock and all of our GenAI environments, you retain ownership of your data. That's been one of the foundational things for us. But regardless of the platform, it comes down to making sure, at the legal contract level, that your vendors understand you retain ownership and controllership of your data. The second part of that is the models: If the models are trained on your data and the vendor retains ownership of the model, they are actually retaining some portion of your data as well. That has to be included in the threat model. And not everyone understands that.

To be clear, you're not asking your third-party vendors not to turn on or add AI because you're against the technology -- you're saying there are a lot of steps you need to take first.

Moses: Yes. There's due diligence. Just like with any application we use, we want to do an application security review on it. This is the same thing: We now have another module of application security review for anything that has AI of any sort, because who knows what variants you have, to ensure that the right guardrails are in place. We want to make sure we're protected from training someone else's model and losing IP or data from that, as well as protecting the data itself. And if you can work through those steps and those are all good, then I think there's a lot of value in AI and really good things you can get from it. Just like anything else that's new, you have to do the due diligence. And we definitely are. If you don't, that's going to be bad.

Rob Wright is a longtime reporter and senior news director for TechTarget Editorial's security team. He drives breaking infosec news and trends coverage. Have a tip? Email him.
