RSAC 2025 to center on agentic AI, GenAI in security
If AI continues to become more accurate and secure, automation and self-healing systems that strengthen security programs could be the future.
As one of the industry's most anticipated events, the RSA Conference (RSAC) agenda is packed with sessions and learning activities led by practitioners and leading security vendors.
Agentic AI will be a key new technology for organizations to embrace, and I expect AI to dominate discussions in my coverage areas of cloud and application security.
AI has garnered a lot of buzz at past conferences, as we've gotten excited about the idea of having tools that can take over manual, time-intensive tasks. However, we've seen this excitement coupled with some challenges. For example, the excitement around using machine learning to train models to perform tasks such as catching abnormalities or performing spam filtering was met with the challenge of not having enough data to train the models.
We also saw hype around generative AI (GenAI), which can perform more assistive tasks to save time and generate content based on LLM training. GenAI has some concerns around accuracy and data input, as well as issues such as hallucinations, but it is widely used.
AI adoption
Enterprise Strategy Group, now part of Omdia, conducted research on the effect of GenAI on application development and found that 99% of organizations use or plan to use GenAI, with 63% of organizations saying they use GenAI tools in production today.
Use cases include faster/easier debugging, code translation, and improved customer support. Over a third (36%) use it for faster code creation, with the majority (89%) of those using GenAI finding improvements with faster code creation.
Our research on the future of security operations in an AI-driven world showed that nearly three-quarters (74%) of organizations utilize GenAI-enabled tools in areas such as detection rules engineering, automated attack mapping, data enrichment for alerts, and attack path analysis.
The promise of agentic AI and trends
Whereas GenAI is often used for assistive capabilities via a chat interface, agentic AI takes things even further as an AI system that can autonomously perform tasks, with minimal or no human intervention. This opens up a world of new possibilities and applications.
For example, there is the new trend of "vibe coding" where people without coding skills build applications simply by describing the application they want to build using prompts; the AI can generate the code and build the application.
Developers can also build more complex agentic applications than traditional applications because they can perform tasks without human intervention. For example, a doctor's office could build an agentic application to automatically interact with patients and schedule appointments.
Security implications
While agentic AI opens new possibilities, as with all innovative technology, we must make sure it can be used securely.
Due to its autonomous nature, agentic AI will need guardrails to mitigate risk. With increased productivity and the ability to build more complex applications comes complexity with scale and added attack surfaces. Security teams will need to be aware of the need to secure access points, APIs, and data sources.
We can also expect hackers to try to take advantage of possible vulnerabilities to attack in areas such as prompt injections and ways to take over the agents to wreak havoc on the tasks they perform. These attack vectors span multiple areas, including application security and software supply chain security, as more code is generated, identity and access management especially with nonhuman identities (NHIs), and API security. Hackers can also use AI to scale their attacks.
Cloud service providers, including AWS, Google, and Microsoft, will continue to tout their AI capabilities and related security offerings as they compete for leadership. We can also expect traditional security vendors describing their capabilities to support secure use of GenAI and agentic AI, including Trend Micro, Palo Alto Networks, CrowdStrike, SentinelOne, Qualys, and Tenable.
Plus we'll see new vendors specializing in security to track and secure AI. Some newer vendors focused on this include Endor Labs, Portal26, Straiker, Protect AI, Prompt Security, CalypsoAI, and Lakera.
Deploying agents to help security teams
In addition to securing AI, we can also expect security vendors to announce AI capabilities within their software. Numerous tools already have GenAI assistive capabilities, but we can also expect vendors to articulate their plans to apply AI, whether it's GenAI assistive tools to help security teams, or plans to release agentic capabilities to perform tasks.
For example, just a few weeks ago at Google Cloud Next, Google announced an alert triage agent in Google Security Operations to autonomously perform alert investigations. This will be valuable as our research shows security teams are inundated with increasing alerts and must ensure they can remediate issues in time to prevent an incident.
Google also announced a malware analysis agent in Google Threat Intelligence to analyze malicious code and autonomously create and execute scripts for deobfuscation. Both are new and in limited availability, but we can count on more agents from cloud providers to reduce the burden on security teams.
For example, just this week, we've seen application security vendors release agentic AI capabilities to better manage rapidly scaling amounts of proprietary code and third-party code, including open source software for software supply chain security. Software supply chain security vendor Lineaje released agentic AI-powered self-healing agents that autonomously fix open source software, source code and containers. Application security vendor Cycode also released "AI Teammates" with four agents to carry out security tasks in detection, prioritization and remediation.
Moving to self-healing systems
There are always adoption issues with new technologies, and it remains to be seen how they will be adopted, based on factors including usability, accuracy, and, of course security. There is also the element of control and how much we still need humans in the loop.
Because of past challenges with automation and AI related to accuracy, the potential to break things, and the potential to be hacked, there are varying levels of comfort in letting go of that control, especially with agentic AI, which allows AI to work autonomously without human intervention.
I expect to have many discussions in these areas with security leaders who want to support their organizations' use of AI to gain productivity and competitive advantages but also need to manage security risks effectively.
Where this is headed is very exciting. If we can get to the place where AI is accurate, smart, and secure, we are headed toward automation and self-healing systems that strengthen our security programs. Does that concept scare you? High risk usually offers potential for high reward. As security professionals, we are the ones to think about and address risk.
My esteemed Enterprise Strategy Group colleagues, Principal Analysts Dave Gruber and Tyler Shields, are presenting an RSAC session on AI-driven exposure management and the path to self-healing systems. They will tie in their recent research to what is needed to get to this ideal future state.
For those going to RSAC next week: have a fun and safe event! Please say hello if you see me or my teammates as we get our miles in walking around the conference.
Melinda Marks is a practice director at Omdia’s Enterprise Strategy Group, where she covers cloud and application security.
Omdia’s Enterprise Strategy Group analysts have business relationships with vendors.