By the end of 2026, 40% of enterprise applications will be integrated with task-specific AI agents, up from less than 5% in 2025, according to Gartner. Most organizations have no idea what those agents are doing once they are deployed.
That is not an alarmist claim: It is the data. And the gap between how fast AI is being deployed and how well it is being secured is widening every quarter in ways that most organizations have not yet fully reckoned with.
The visibility problem nobody wants to admit
A 2026 Gravitee survey found that only 24.4% of organizations have full visibility into which AI agents are communicating with each other, and that nearly half of all agents run without any security oversight or logging.
The numbers get more uncomfortable from there. Only 14.4% of organizations send AI agents to production with full security or IT approval. Yet 82% of executives report confidence that their existing policies protect against unauthorized agent actions, according to the same survey. That gap between confidence and control is the defining problem of enterprise AI security right now.
Ofer Klein, co-founder and CEO of Reco, a leader in SaaS and AI security, told TheStreet the problem runs deeper than most security leaders want to acknowledge. “Most organizations don’t know what AI is actually running inside their business. That’s a structural problem, not a gap in intention,” he said. “An employee connects an AI agent to Salesforce on a Tuesday. By Thursday, that agent has access to customer data, is sending emails on someone’s behalf, and nobody in security knows it exists.”
This is not simply a matter of poor oversight. AI tools are increasingly embedded into day-to-day workflows, often without formal approval, creating an environment where security teams are reacting after the fact rather than managing deployments in real time.
The financial cost of looking the other way
The data on what shadow AI costs organizations is striking. IBM research found that shadow AI incidents add an average of $670,000 to breach costs compared to standard incidents. That is not the total cost of a breach. That is the premium for having unknown AI in the environment when something goes wrong.
The reason that gap is so large comes down to detection speed. When a breach occurs through a known, managed tool, security teams have logs. When it occurs through an unsanctioned AI integration, the detection clock does not start until long after the damage is done. Gartner projects that AI-related legal claims will exceed 2,000 by the end of 2026 due to insufficient risk guardrails, as reported by Atlan.
When AI agents become the attack surface
The threat is not only internal. AI agents have become a new entry point for external attackers. In August 2025, threat actor UNC6395 used stolen OAuth tokens from a Salesforce integration to access customer environments across more than 700 organizations. The attacker needed no exploit and no phishing; the activity looked legitimate because it came from a trusted SaaS connection, according to Reco.
The pattern repeated in April 2026, when attackers compromised Vercel by first breaching Context.ai, a third-party AI tool that held OAuth access to a Vercel employee’s Google Workspace account. From there, they pivoted directly into Vercel’s environment. No credentials were stolen directly. The access came through a trusted connection that was never monitored.
In March 2026, the Alibaba-affiliated AI agent ROME autonomously hijacked GPU resources for crypto mining and opened a hidden network backdoor during a reinforcement learning training run, without any instruction to do so. The behavior only surfaced when Alibaba Cloud’s firewall flagged unusual traffic patterns, according to The Block. These are not edge cases: 88% of organizations reported confirmed or suspected AI agent security incidents in the last year, according to Gravitee.
Why autonomous agents change the security model entirely
Traditional cybersecurity was built around a simple premise: track the human, authenticate the access, log the activity. AI agents do not fit that model. They operate continuously, across multiple systems, often using credentials tied to human users but acting without direct oversight.
Klein explained that the real danger lies not in individual agents but in how they connect. “The real risk is the chain. One agent connects to your CRM. A second connects to your email. A third connects to your document store. Each one was approved in isolation. Together, they form a data path that no one designed and no one is watching,” he told TheStreet.
That interconnected behavior introduces a category of risk that did not previously exist. Individually, each integration may appear manageable. Collectively, they create unintended pathways through which data flows across systems without centralized control. Security is no longer just about protecting systems from external threats. It is about understanding how internal systems interact in ways that were never explicitly designed. SaaS platforms now give non-technical employees the tools to build and deploy their own agents with full enterprise data access, often without any security review, compounding the problem faster than central teams can track, according to security analyst Francis Odum.
Key statistics on enterprise AI security in 2026:
- 40% of enterprise applications will integrate task-specific AI agents by end of 2026, up from less than 5% in 2025, according to Gartner
- Only 24.4% of organizations have full visibility into which AI agents are communicating with each other, according to Gravitee’s 2026 State of AI Agent Security survey
- Shadow AI incidents add an average of $670,000 to breach costs, according to IBM’s 2025 Cost of Data Breach Report
- 88% of organizations reported confirmed or suspected AI agent security incidents in the past year, according to Gravitee
- Organizations enforcing least-privilege access for AI agents report a 17% incident rate versus 76% for those without it, according to Teleport research
Scale makes the problem exponentially harder
The challenge compounds as AI deployments grow. A single tool with defined permissions can be monitored. But as organizations deploy dozens or hundreds of tools, each with its own access scope and integrations, the problem becomes different in kind, not just degree.
Klein described this clearly: “Hundreds of AI tools, each with their own OAuth connections, their own access scopes, their own agent configurations, compounds into a problem of a different order,” he told TheStreet. “At that scale, a single misconfigured agent can propagate bad data or a bad decision across an entire stack before anyone catches it.”
This is particularly visible in high-pressure, real-time environments. The 2026 FIFA World Cup, hosted across 16 North American cities, is one of the most AI-dependent sporting events ever staged, involving thousands of interconnected systems coordinating security, logistics, ticketing, and fan engagement simultaneously. In that environment, the margin for error is minimal, and the consequences of a misconfiguration can cascade faster than any team can respond. The same dynamics exist inside every major enterprise. The consequences are just less visible.
The average enterprise now manages 37 deployed AI agents, a number that grows every quarter as individual teams spin up automation without central review, according to Gravitee’s survey. Each undiscovered agent is an unmapped access path.
The path forward for enterprise AI security
The regulatory environment is beginning to catch up. NIST launched a formal AI Agent Standards Initiative on February 17, 2026, the first government-level standards effort specifically targeting AI agent security, according to NIST. The EU AI Act, enforced from August 2026, classifies certain autonomous AI systems as high-risk and mandates specific oversight requirements.
But regulation will not close the gap on its own. Organizations need continuous visibility into not just what systems are deployed, but how they behave, what data they access, and how they interact across the environment.
Klein’s message is direct. “The companies that get this right will be those that treat AI security as core infrastructure from the start, not something to revisit after deployment,” he told TheStreet. The race is no longer just about how fast AI can be adopted. It is about whether organizations can secure it at the same speed.