Agentic AI and Leadership: How I'm Preparing as a Compliance and Security Leader
Checking in:
For the past few months I’ve seen the phrase Agentic AI everywhere: vendor emails, ads, someone’s post while I was randomly scrolling LinkedIn. At first, I ignored it. It felt like another buzzword that didn’t have much to do with the very real, very immediate challenges on my plate as a Compliance and Security leader (and honestly, life was life-ing and I didn’t have the space to slow down and dive deep into the topic the way I wanted to).
But then one email caught my attention. It pitched agentic AI as a way to automate GRC tasks with a tool we had already purchased; more specifically, the kind of tasks my team members were neck-deep in: audits, evidence collection, and all the behind-the-scenes monitoring that keeps our programs running.
I thought it would make a great topic for this month’s newsletter and a chance to bring you along on my learning journey, especially if you’re aspiring to Senior or Executive roles in Cyber/Compliance.
So What is Agentic AI All About?
After reading that email, curiosity kicked in and I did what most of us do when we’re trying to make sense of a new tech trend - I googled it. I am an open-the-dictionary kind of girl when it comes to learning (something my grandfather instilled in me growing up; whenever I had a question about something, he pointed to our extensive and expensive encyclopedia set for answers) and wanted to know what the term agentic actually meant. After skimming a few articles, I started piecing together the difference between generative AI and agentic systems. The more I read, the more I saw possibilities (and of course, problems, because am I really a security professional if I don’t see challenges in things lol). Here’s a bit of what I learned:
Over the last couple of years, we’ve all gotten used to using generative AI tools; think ChatGPT, Claude, and Notion AI to name a few. Generative AI is a type of artificial intelligence that creates new and original content by learning patterns from large amounts of data. Generative AI tools typically wait for us to give them a prompt and then they respond with an output: perhaps some code for a project you are creating, or an answer to that pressing question you had, or perhaps a recipe you asked it to create for dinner tonight. They’re helpful tools, but they’re still pretty reactive and rely on us to tell them what to do.
Agentic AI is a totally different beast. Instead of just responding to a prompt we enter, it takes a goal you give it and actually goes and does it. Think: your own personal digital assistant who can check and respond to your emails, set up an appointment, and schedule your Uber Eats pick-up order. Here’s how it works: the AI agent (whenever you hear Agentic AI, think AI agent; for the purposes of this article they are one and the same) receives a signal (this could be event-driven, scheduled, or continuous) from whatever system it is integrated with, figures out what to do next with that information by making a plan, and then takes action based on the plan it created. In addition to that mind-blowing information, the AI agent will also adjust itself based on the results it achieves.
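To make the signal-plan-act-adjust loop concrete, here’s a tiny sketch in Python. Everything in it is invented for illustration (the signal name, the actions, the “memory”); it’s a toy, not a real agentic framework:

```python
# Toy sketch of an agent loop: receive a signal, make a plan, act on it,
# and adjust based on results. All names here are illustrative inventions.

def plan(signal, memory):
    """Decide which actions to take, using the signal plus past outcomes."""
    if signal == "overdue_evidence" and memory.get("last_reminder_worked", True):
        return ["send_reminder"]
    return ["escalate_to_human"]

def act(action):
    """Execute one action and report success (stubbed out for the sketch)."""
    print(f"executing: {action}")
    return action == "send_reminder"  # pretend reminders always succeed

def agent_step(signal, memory):
    for action in plan(signal, memory):
        succeeded = act(action)
        # Adjust: remember the outcome so the next plan can change.
        memory["last_reminder_worked"] = succeeded
    return memory

memory = agent_step("overdue_evidence", {})
```

The key difference from a chatbot is the loop: the agent isn’t waiting on a prompt, it reacts to a signal and carries state between runs.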
It sounded very cool. And also very scary. And that’s when my leadership brain kicked in. Because if these agents are out there making decisions, I can imagine all types of things that can potentially go wrong. I was also curious about things like accountability, establishing trust, guardrails, current use cases, challenges, and benefits, and more importantly, how it will affect my day-to-day, both personally and at work. So I took my learning a step further.
Getting hands on with Agentic AI
I wanted to get more hands-on with Agentic AI and that’s where LinkedIn’s “10-Day Learning AI Challenge” entered the picture (a very cool learning challenge, highly recommend). One of the tasks was to build an AI-powered podcast about a topic you wanted to understand better. I picked Agentic AI — and gave ChatGPT a prompt to summarize the latest research in cybersecurity and compliance, NPR-style (I love Up First, please support your local media).
But the magic wasn’t just in the podcast. Did you know you can talk directly with ChatGPT? There is a voice mode feature where you can actually go back and forth with the technology. So as ChatGPT was summarizing all of the material via the podcast, I could pause the summary, ask follow-up questions, challenge ideas, and push deeper into the material in real time. It didn’t just present the information; I was able to steer the learning experience. I walked away with a stronger grasp of both what Agentic AI is and the kind of practical questions I needed to start asking as a leader:
How are real companies actually using Agentic AI today — in GRC, Security, and other industries?
What risks could this introduce into the environments I’m responsible for?
What questions should I be asking my team, our vendors, and myself to stay ahead?
Are our current controls and risk assessments even built to catch these use cases?
Most importantly: How can this help my teams and me? Because even though we are in the Wild Wild West of AI (shoutout to Will Smith and Sisqo) and things are scary, as a professional and citizen, I can’t afford to NOT know how this will affect me. Because it will, whether I want it to or not.
So I did an experiment: Using ChatGPT and a free Notion account as my sandbox, I built an agentic workflow to automate filling out vendor security questionnaires - a real time-consuming task for my team members no matter how good our documentation is. Here’s what I did:
Created a golden library in Notion filled with dummy responses (golden library = verified responses to common questions you may be asked by a vendor)
Prompted ChatGPT to act like an assistant that: 1) Parses questionnaires, 2) Retrieves matching answers, 3) Suggests responses, 4) Flags questions that need escalation
It walked me through each step like a play-by-play. While ChatGPT couldn’t fully automate the workflow on its own (agent mode isn’t fully fleshed out), it mapped out each step clearly, giving me a solid blueprint for how a real agentic system could be designed and deployed.
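For the curious, that blueprint could look something like this in code. This is a toy sketch: the golden library entries, the questions, and the keyword matching are all made up for illustration (a real agent would use the actual Notion library and smarter retrieval):

```python
# Toy sketch of the questionnaire workflow: parse, retrieve, suggest, flag.
# The golden library and matching logic are invented stand-ins.

GOLDEN_LIBRARY = {
    "encryption at rest": "All customer data is encrypted at rest using AES-256.",
    "mfa": "MFA is enforced for all employee accounts.",
}

def parse_questionnaire(text):
    """Split a raw questionnaire into individual questions."""
    return [line.strip() for line in text.splitlines() if line.strip().endswith("?")]

def retrieve_answer(question):
    """Naive keyword match against the golden library."""
    q = question.lower()
    for topic, answer in GOLDEN_LIBRARY.items():
        if topic in q:
            return answer
    return None

def process(text):
    suggestions, escalations = {}, []
    for question in parse_questionnaire(text):
        answer = retrieve_answer(question)
        if answer:
            suggestions[question] = answer   # suggest a verified response
        else:
            escalations.append(question)     # flag for a human
    return suggestions, escalations

raw = """Do you enforce MFA?
Do you support encryption at rest?
Do you have a bug bounty program?"""
suggestions, escalations = process(raw)
```

The escalation list is the important part: anything the library can’t answer confidently goes to a human instead of being auto-filled.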
From Concept to Reality: Agentic AI is Already Here
At first, I thought Agentic AI was just a concept; something theoretical or experimental, not something I needed to worry about for another year or so (because honestly, the list of things to worry about in cyber and GRC is long af). But I was wrong. These agents are already out in the wild. They’re being embedded in tools you and I use, surfacing in places like vendor platforms, and quietly solving real-world challenges in industries like healthcare, finance, and tech.
How Agentic AI Can Evolve GRC
If you’ve been in this field a while, you know that GRC teams are often stuck reacting to engineering and regulation cycles instead of leading the charge. But agentic AI could change that. Here are a few possibilities I am excited about:
Compliance Automation: Agents could track and report on who’s completed training (and who hasn’t), which vendors are overdue on evidence uploads, and which new hires haven’t met onboarding security requirements.
Risk Reporting: Instead of spending hours manually updating a dashboard for that monthly meeting, agents could gather real-time risk data, enrich it with context, and present it in a format tailored to various stakeholders (think auditors, board members, cross-functional leadership).
Audit Prep: Imagine an AI assistant that can collect system artifacts across environments, check for timestamp integrity, and flag documentation gaps before you even show an auditor your controls.
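To ground the audit-prep idea, here’s a tiny sketch of the kind of check such an agent might run: scanning collected evidence for missing or stale artifacts. The artifact names and the 90-day freshness window are assumptions I made up for the example:

```python
# Toy sketch of an audit-prep check: flag evidence that is missing or stale.
# Artifact names and the freshness window are invented examples.
from datetime import datetime, timedelta, timezone

REQUIRED_ARTIFACTS = ["access_review.csv", "backup_test.log", "pentest_report.pdf"]
MAX_AGE = timedelta(days=90)  # assumed evidence freshness window

def check_evidence(collected):
    """collected maps artifact name -> last-modified datetime (UTC)."""
    now = datetime.now(timezone.utc)
    gaps = []
    for name in REQUIRED_ARTIFACTS:
        if name not in collected:
            gaps.append(f"MISSING: {name}")
        elif now - collected[name] > MAX_AGE:
            gaps.append(f"STALE: {name} (last updated {collected[name]:%Y-%m-%d})")
    return gaps

collected = {
    "access_review.csv": datetime.now(timezone.utc) - timedelta(days=10),
    "backup_test.log": datetime.now(timezone.utc) - timedelta(days=200),
}
for gap in check_evidence(collected):
    print(gap)
```

An agent version of this would gather the timestamps itself and surface the gaps before the auditor ever asks.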
How Agentic AI Can Boost SecOps
Security Operations is another high-friction, high-fatigue area where agents could become powerful teammates for you. Here are some ways I can see that playing out:
Vulnerability Management: Agents can proactively scan code and infrastructure, emulate attack behavior through dynamic testing, and recommend or even implement remediations.
Incident Response: Agents can follow incident response playbooks automatically, enrich your alerts with relevant context, isolate impacted systems in real time, and coordinate communication across key stakeholders - all without waiting for your team member to initiate.
Threat Hunting: Think of your agents as tireless analysts that can sift through logs, spot unusual patterns, and monitor endpoint activity; 24 hours a day, 7 days a week, without the burnout that comes with that kind of work.
Challenges You Should Be Aware of As a Cybersecurity and Compliance Leader
I’m not going to lie, everything I just shared above sounds exciting as heck for the future. But I am not going to sugarcoat the risks and concerns that this technology brings to an already thankless, complex, and difficult field. Here are some of the concerns I am already tracking:
The concept of Excessive Agency is all about agents with too many permissions acting beyond their intended scope. In plain terms, imagine your agent going rogue and taking action like deleting a playbook or even stopping something in production. In 2023, a CISA-led exercise showed how poor coordination, coupled with AI agents thrown into the mix without shared context or oversight, allowed a simulated breach to go undetected for months.
Credential Sprawl: Today, most companies already struggle to track where credentials live, who has access, and what secrets are still active. Between hardcoded credentials in codebases, service accounts with persistent tokens, and platforms requiring API keys for every integration, it’s not unusual to find hundreds of access tokens floating around in an organization; often unmanaged and rotated only occasionally, if at all. Now imagine this problem multiplied across a growing fleet of AI agents. Each agent will need its own set of credentials to do its job, and without proper guardrails you could be expanding your own attack surface in ways that can quickly spiral out of control. That’s where privileged access management, credential lifecycle tracking, and other controls will be key for you.
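One guardrail worth sketching is issuing each agent short-lived, narrowly scoped credentials instead of long-lived tokens. This toy illustration shows the lifecycle idea only; the TTL, scope names, and functions are my own inventions, and a real deployment would lean on a secrets manager or workload identity rather than hand-rolled code:

```python
# Toy illustration of short-lived, scoped agent credentials.
# Nothing here is a real secrets-management API.
import secrets
import time

TTL_SECONDS = 900  # assumed 15-minute lifetime per credential

def issue_credential(agent_id, scopes):
    """Mint a scoped token with an expiry; nothing persists beyond the TTL."""
    return {
        "agent": agent_id,
        "token": secrets.token_urlsafe(16),
        "scopes": set(scopes),
        "expires_at": time.time() + TTL_SECONDS,
    }

def authorize(credential, scope):
    """Reject expired tokens and out-of-scope actions."""
    if time.time() >= credential["expires_at"]:
        return False
    return scope in credential["scopes"]

cred = issue_credential("questionnaire-agent", ["read:golden_library"])
print(authorize(cred, "read:golden_library"))   # within scope
print(authorize(cred, "delete:production_db"))  # out of scope, denied
```

The point is that a leaked agent token is only dangerous for minutes, and only for the narrow scope it was minted with.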
The same tool that will help you can be used against you. Sometimes the very tools we build to empower our teams or organizations can become liabilities. In the context of agentic AI, the same capabilities we rely on to drive efficiency - such as automating testing, simulating attacks, scanning code, or making decisions - can be exploited maliciously to cause harm. For example, an AI agent built to autonomously conduct penetration testing could, in the wrong hands, become an always-on digital attacker. Or what about an agent built to be an internal risk analyst that gets tricked into exfiltrating business logic or manipulating risk scores? (*bites nails*)
This last concern I am tracking is a doozy because bay-bee, Accountability is already messy in the corporate world. One of the biggest challenges leaders face today is: who’s responsible when a system makes a bad call? Even before AI entered the group chat, teams struggled with accountability: a script deletes critical data - was it the engineer who wrote it or the person who approved the deployment? A risk assessment misses something major - is it the tool’s fault or the human’s for trusting the output? With agentic AI, the system doesn’t just follow instructions; it reasons, acts, and learns on its own. So what happens when an agent accidentally deletes a user database to “optimize performance”? Or a vendor’s embedded agent takes an action that violates your internal policy, and you didn’t even know that agent had the capability? Accountability gaps don’t just create operational confusion; they create legal, ethical, and reputational risk.
In security and compliance, we’re already held to high standards of oversight and auditability. But with agentic systems we’re now introducing actors that can make decisions, execute changes, and learn from feedback without direct human intervention every step of the way. Idk about you, but learning that made me take a step back (and say wtf under my breath).
What Needs to Happen
To help me prepare for this inevitable shift, I plan to:
Investigate and push for transparency from vendors: What can your agent do? What access does it have? What decisions can it make without approval? How can I reverse things?
Collaborate with Legal to understand if our contracts and SLAs need to be updated to include clear language around agent behavior, fail-safes, and rollback responsibilities.
Update and define escalation paths internally. If an agent goes off the rails, who will shut it down and how quickly can we recover?
Design and influence accountability layers in our architecture. This includes decision logs, playbooks, RACIs, and permission boundaries that limit the scope of actions an agent can take.
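To show what two of those layers could look like in practice, here’s a small sketch combining a permission boundary with a decision log. The allowed actions and the policy itself are invented for illustration; the point is the pattern, not the specifics:

```python
# Toy sketch of two accountability layers: a permission boundary limiting
# what an agent may do, and a decision log recording every attempt.
from datetime import datetime, timezone

ALLOWED_ACTIONS = {"read_dashboard", "send_reminder"}  # assumed policy
decision_log = []

def guarded_execute(agent_id, action, reason):
    """Log every requested action, then allow or block it per policy."""
    allowed = action in ALLOWED_ACTIONS
    decision_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "reason": reason,
        "allowed": allowed,
    })
    if not allowed:
        return f"BLOCKED: {action} is outside {agent_id}'s permission boundary"
    return f"OK: {action} executed"

print(guarded_execute("risk-agent", "send_reminder", "evidence overdue"))
print(guarded_execute("risk-agent", "delete_user_db", "optimize performance"))
```

Notice the log captures blocked attempts too; when someone asks “who’s responsible?”, the record of what the agent tried, when, and why is the starting point.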
Last, but never least, educate stakeholders that AI agents aren’t the Elder Wand (Harry Potter is life) we think they are. They’re great tools and systems, and like all systems, they can fail.
So What Can You Do Today as a Leader To Prepare?
Here’s what I’ve been doing and what you can try too:
Learn Actively: I’ve committed at least 2 hours weekly, for several weeks, to study. This includes podcasts, papers, YouTube, and most importantly hands-on experiments.
Explore Your Tools: Get clear on the tools you have implemented today, if you have not already, and investigate whether anything you already use has agentic AI embedded, available, or on the roadmap. Remember, this all started because a tool I know my team uses today referenced it. If one is doing it, best believe others are too.
Stay Informed: I track publications from various trusted sources like the Cloud Security Alliance, OWASP, the Verizon DBIR, and even my LinkedIn feed.
Get your hands dirty and experiment: Now this one is scary because I, too, had no idea what I was doing. But I leaned into my Sim trait of curiosity and just asked questions to see where they led me. I tried OpenAI’s new agent mode in real scenarios and figured out what worked, what didn’t, and what governance would require. The point isn’t to master everything overnight - it’s to build small steps toward fluency. This will allow us to lead the conversations and not just react to them.
Here’s a checklist you can adopt this week:
Pick a day and time you can commit to for the month of October. Dedicate some time to learn more about agentic AI (or whatever topic you are interested in that impacts your day-to-day).
In your next vendor review, ask your vendor directly if agentic AI is part of their product (or on the roadmap!)
Create an internal survey and share with your teams to uncover shadow experiments
Choose one tool in your stack and map where agentic AI could already exist
Follow 2-3 trusted publications to track emerging technology developments monthly.
My Question To You
I’m still learning, but I am leaning into this wave of emerging technology. What’s a new or emerging technology you’ve been brushing off but are secretly curious about?