Making Agentic AI Work in the Real World


Two years ago, ChatGPT couldn’t even tell you what day it was. These early models were frozen at their training cutoff—brilliant conversationalists who could discuss Shakespeare but not yesterday’s news.

Then came web search. Language models could suddenly fact-check themselves and pull current information. But they remained observers, not participants. They could tell you about the world but couldn’t touch it.

Today’s agentic AI represents a fundamental shift: we’ve given these systems tools. Take this scenario: you are planning a family vacation to Tokyo. A modern AI agent doesn’t just suggest an itinerary. It watches travel vlogs, cross-references museum hours with your kids’ nap schedules, books that hidden ramen shop, coordinates calendars, and handles deposits. It’s not just thinking. It’s doing.

For enterprise organizations, the stakes multiply exponentially. Beyond personal data, we’re talking about intellectual property, customer information, and company reputation. When you deploy an agent to negotiate vendor contracts, it shouldn’t have access to your M&A plans. When it’s analyzing competitor pricing, it shouldn’t be able to share your internal roadmap. When processing employee benefits, it must protect health information. When analyzing customer behavior, it must safeguard personally identifiable information from being exposed in summaries or reports.

The challenge compounds with emergent behaviors—AI agents finding creative ways to complete tasks that we never anticipated. An agent told to “reduce customer support costs” might start auto-rejecting valid claims. One tasked with “improving meeting efficiency” could begin declining important stakeholder invites.

So how do we safely leverage the unparalleled potential of Agentic AI? This demands a new security paradigm. Authentication becomes: “Is this AI really acting on my behalf?” Authorization becomes: “What should my AI be allowed to do?” The principle of least privilege becomes critical when the actor is an AI operating at machine speed with its own problem-solving creativity. The stakes have fundamentally changed. The biggest hurdle to adoption will be giving agents safe and secure access to enterprise resources.

Enterprise adoption of AI agents requires solving a critical new challenge: how to grant agents access to corporate resources like Google Workspace or Slack APIs without privileging them beyond their intended scope. Traditional OAuth implementations provide only coarse-grained permissions—typically read or read-write access at the application level—creating an all-or-nothing security model that doesn’t align with agent-specific use cases.
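To make the mismatch concrete, here is a minimal sketch of the least-privilege gap. The scope names and task-to-scope mapping below are illustrative placeholders, not real Google Workspace or Slack scope strings: with only application-level scopes available, an agent that merely summarizes email ends up holding full read-write access.

```python
# Hypothetical scope names for illustration only.
COARSE_SCOPES = {"mail": {"mail.read_write"}}      # all-or-nothing, app-level
FINE_SCOPES = {                                    # what agent tasks actually need
    "summarize_inbox": {"mail.read"},
    "triage_inbox": {"mail.read", "mail.label"},
}

def least_privilege_scopes(task: str) -> set:
    """Return the narrowest scope set known for a task.

    Falls back to the coarse application-level scope when no
    fine-grained mapping exists -- over-privileging by construction.
    """
    if task in FINE_SCOPES:
        return FINE_SCOPES[task]
    return COARSE_SCOPES["mail"]

print(least_privilege_scopes("summarize_inbox"))  # {'mail.read'}
```

Note the fallback branch: any task the scope catalog doesn’t cover silently inherits full read-write access, which is exactly the all-or-nothing model described above.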

We are building the ability for an enterprise to implement dynamic, context-aware permission management that evaluates agent requests against both explicit policy rules and semantic analysis of the agent’s stated purpose. The system enables employees to delegate granular permissions—for example, allowing an agent to read emails for summarization while preventing it from deleting emails—through a consent-driven workflow that tracks and manages narrow permission lifecycles. By combining OAuth 2.1 compliance with semantic inspection, we can detect and block prohibited activities automatically, keeping the user experience smooth. Critical actions would require a user’s explicit authorization to avoid mishaps.
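A rough sketch of that evaluation flow might look like the following. This is not Cisco’s implementation: the class names, action strings, and especially the keyword screen (a crude stand-in for real semantic analysis of the agent’s stated purpose) are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRequest:
    agent_id: str
    action: str            # e.g. "mail.read", "mail.delete" (illustrative)
    stated_purpose: str    # free-text purpose the agent declares

@dataclass
class PermissionPolicy:
    allowed_actions: set = field(default_factory=set)
    critical_actions: set = field(default_factory=set)   # escalate to the user
    prohibited_terms: set = field(default_factory=set)   # semantic-check stand-in

    def evaluate(self, req: AgentRequest) -> str:
        # 1. Explicit rule check: was this action delegated at all?
        if req.action not in self.allowed_actions | self.critical_actions:
            return "deny"
        # 2. Semantic inspection (here, a simple keyword screen as a
        #    placeholder for real semantic analysis of stated purpose).
        purpose = req.stated_purpose.lower()
        if any(term in purpose for term in self.prohibited_terms):
            return "deny"
        # 3. Critical actions require explicit user authorization.
        if req.action in self.critical_actions:
            return "ask_user"
        return "allow"

policy = PermissionPolicy(
    allowed_actions={"mail.read"},
    critical_actions={"mail.delete"},
    prohibited_terms={"forward externally", "exfiltrate"},
)
print(policy.evaluate(AgentRequest("a1", "mail.read", "summarize today's inbox")))  # allow
print(policy.evaluate(AgentRequest("a1", "mail.delete", "clean up old threads")))   # ask_user
```

The design point is the three-verdict output: most requests resolve to allow or deny automatically, so the user is only interrupted for the small set of critical actions.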

We are doing this by extending the same principles of zero trust to Agentic AI. Whether agents are built in-house or outsourced, running on laptops, in the cloud, or in your own data centers, and whether they need access to SaaS, cloud, or on-prem applications, Cisco’s Universal Zero Trust Network Access (UZTNA) architecture gives you the tools you need to adopt Agentic AI for your organization.

At the heart of our UZTNA is one simple truth: we must take an identity-first approach to security. Identity transcends traditional technology boundaries, giving you the ability to establish policies at an individual level for humans, machines, services—and now, Agentic AI. With this foundation, the system can continuously monitor behaviors to distinguish ‘normal’ from ‘abnormal’ in near real time, updating policies accordingly.
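As a toy illustration of that continuous monitoring, the sketch below flags an identity whose action rate deviates sharply from its own rolling baseline. The window size and deviation factor are arbitrary assumptions; a production system would use far richer behavioral signals than a simple rate.

```python
from collections import deque

class BehaviorMonitor:
    """Toy sketch: compare an identity's actions-per-interval against
    its rolling baseline. Thresholds are illustrative only."""

    def __init__(self, window: int = 10, factor: float = 3.0):
        self.history = deque(maxlen=window)  # recent per-interval counts
        self.factor = factor                 # deviation multiplier

    def observe(self, count: int) -> str:
        verdict = "normal"
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            if baseline and count > self.factor * baseline:
                verdict = "abnormal"
        if verdict == "normal":
            # Only normal samples update the baseline, so one anomaly
            # cannot drag the baseline up and mask the next one.
            self.history.append(count)
        return verdict

mon = BehaviorMonitor(window=5)
for n in [10, 12, 9, 11, 10]:   # establish a baseline
    mon.observe(n)
print(mon.observe(200))         # abnormal
```

Because identity is the anchor, the same baseline logic applies uniformly to humans, machines, services, and agents; only the observed behaviors differ.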

Putting the UZTNA architecture into action: Duo Identity & Access Management (IAM) provides the authorization; Secure Access performs semantic inspection so the end user is not prompted repeatedly for access permission; AI Defense evaluates whether agent actions align with the agent’s purpose; and Cisco Identity Intelligence monitors those actions and provides visibility. Together, they deliver powerful protection without compromising Agentic AI adoption or experience.

More and more, we are going to see Agentic AI become an everyday reality—integrated into workstreams with the same autonomy as a human but with the speed and scale of a machine. While it represents boundless opportunities, the authorization and access challenges have to be solved. With Cisco’s UZTNA architecture, no matter who builds these agents, where they run, or what they need to get the job done, we can ensure enterprise organizations have visibility and control across identity, authentication, authorization, access, and analytics.

The future of AI is agentic—and with the right safeguards in place, it can also be secure.


We’d love to hear what you think! Ask a question and stay connected with Cisco Security on social media.
