Organisations betting on agentic AI face a tension between the technology’s promise to automate complex workflows and the practical challenge of keeping enterprise data secure. Agentic systems, autonomous software that plans and acts toward goals, can deliver major productivity gains, but they also require broad access to information across silos. That access model is forcing security teams to rethink the assumptions behind traditional identity-and-access approaches, industry observers say.

According to TechRadar, the classic perimeter of role-based permissions is ill-suited to agents that reason across datasets and infer connections not explicitly permitted by static policies. The outlet warns of “contextual privilege escalation,” where the meaning an agent derives from otherwise innocuous data becomes the avenue for exposure. The piece recommends shifting controls from rigid access lists toward mechanisms that govern intent: intent binding, dynamic authorisation, provenance tracking, human-in-the-loop checkpoints and contextual auditing.
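The intent-governance controls TechRadar describes can be made concrete with a minimal sketch. The class and function names below (`Intent`, `authorise`) and the action labels are hypothetical illustrations, not any vendor's API: each action is bound to a declared objective, authorisation is decided dynamically against that intent rather than a static role, and every decision is logged for contextual auditing.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A declared objective that every agent action must be bound to."""
    objective: str
    allowed_actions: set = field(default_factory=set)

def authorise(intent: Intent, action: str, resource: str, audit_log: list) -> bool:
    """Dynamic authorisation: permit only actions bound to the declared
    intent, and record provenance for contextual auditing either way."""
    permitted = action in intent.allowed_actions
    audit_log.append({"objective": intent.objective, "action": action,
                      "resource": resource, "permitted": permitted})
    return permitted

# An agent tasked with summarising a report may read it, but an attempt
# to export the same file falls outside the bound intent and is refused.
log = []
intent = Intent("summarise Q3 report", {"read"})
print(authorise(intent, "read", "q3_report.pdf", log))    # True
print(authorise(intent, "export", "q3_report.pdf", log))  # False
```

The point of the sketch is the shape of the check: the deny decision is reached from the stated objective, not from a file-level access list, and the refusal itself becomes audit evidence.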

Security specialists and compliance vendors echo that call for new controls. Vanta’s guidance highlights the opacity of many agentic systems and the attendant compliance risks under emerging regimes such as the EU AI Act. The company advises embedding explainability and decision traceability into agent designs, alongside thorough model documentation, to reduce the “black box” problem and enable audits. Forbes also cautions that agentic AI can inherit biases from training data and that ongoing governance, high-quality training sets and periodic fairness audits are essential to avoid discriminatory outcomes.

Practitioners should expect to balance enabling productivity with hardening the attack surface. Integration missteps can expand exposure, Vanta warns, while TechRadar notes that monitoring an agent’s goals and reasoning paths is as important as controlling which files it may read. Organisations are therefore exploring a layered approach: minimise the data surface agents see through fine-grained data-masking and synthetic proxies; record end-to-end provenance for each action; and require explicit human approvals for high-risk tasks.
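The three layers above can be sketched together. This is an illustrative outline under stated assumptions, not a production design: the field names, the `HIGH_RISK` action set, and the hashing-based masking scheme are all hypothetical choices made for the example.

```python
import hashlib
import time

def mask_record(record: dict, sensitive: set) -> dict:
    """Minimise the data surface: replace sensitive values with short,
    stable hashes so the agent can still correlate records across
    datasets without ever seeing the raw values."""
    return {k: (hashlib.sha256(str(v).encode()).hexdigest()[:12]
                if k in sensitive else v)
            for k, v in record.items()}

class ProvenanceLog:
    """Append-only trail of agent actions for end-to-end auditing."""
    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, detail: str) -> None:
        self.entries.append({"ts": time.time(), "actor": actor,
                             "action": action, "detail": detail})

# Hypothetical set of actions that must pause for explicit sign-off.
HIGH_RISK = {"delete", "external_share", "payment"}

def needs_human_approval(action: str) -> bool:
    """Human-in-the-loop checkpoint for high-risk tasks."""
    return action in HIGH_RISK

# The agent works on a masked view, every step lands in the provenance
# log, and risky actions are held for a human decision.
log = ProvenanceLog()
masked = mask_record({"name": "Ada Lovelace", "region": "EMEA"}, {"name"})
log.record("agent-7", "read", "customer record (masked)")
print(masked["region"], needs_human_approval("external_share"))  # EMEA True
```

Each layer is independently simple; the defence comes from composing them so that no single failure exposes raw data, erases history, or executes a high-risk step unattended.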

Vendor claims about agentic capability require the same editorial caution as other product messaging. Many suppliers position agents as transformational; buyers must demand evidence of explainability, controllable scopes, and repeatable audit trails. Where vendors describe regulatory readiness, purchasers should validate those claims against documentation and independent testing rather than accept them at face value.

The governance toolkit is growing. Enterprises are piloting intent-aware policy engines that evaluate whether a requested sequence of actions matches an approved objective, rather than simply checking file-level permissions. Others adopt runtime monitors that flag anomalous multi-step behaviours for human review. Industry guidance suggests combining technical controls with organisational measures: clear ownership of agent behaviour, regular red-teaming, and cross-functional incident playbooks that include AI-specific scenarios.
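An intent-aware policy engine of the kind described above might evaluate a whole requested sequence rather than individual permissions. The sketch below assumes a hypothetical plan registry and invented action names; it is one possible shape for such a check, not a reference to any shipping product.

```python
# Hypothetical registry: objective -> the ordered action sequence
# approved for it.
APPROVED_PLANS = {
    "generate_invoice": ["read_order", "read_customer", "write_invoice"],
}

def evaluate_sequence(objective: str, requested: list):
    """Intent-aware check: allow a multi-step request only if it is a
    prefix of a plan approved for the stated objective; otherwise return
    the deviating steps so a runtime monitor can flag them for review."""
    plan = APPROVED_PLANS.get(objective)
    if plan is None:
        return False, list(requested)   # unknown objective: flag everything
    if requested == plan[:len(requested)]:
        return True, []                 # sequence matches the approved plan
    # Find the first step that deviates from the plan and flag from there.
    i = 0
    while i < len(requested) and i < len(plan) and requested[i] == plan[i]:
        i += 1
    return False, requested[i:]

ok, flagged = evaluate_sequence("generate_invoice",
                                ["read_order", "export_customer_db"])
print(ok, flagged)  # False ['export_customer_db']
```

Note what the file-level check would have missed: the agent may well hold read permission on the customer database, but exporting it is not a step in any plan approved for invoicing, so the anomalous step is surfaced for human review.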

The security debate is unfolding alongside broader commercial and staffing moves in the enterprise technology sector. Infor this week named Geoff Thomas as Senior Vice President and General Manager for Asia Pacific and Japan, a move framed by the company as a bid to deepen regional customer engagement and partner networks, according to a PR Newswire release and regional reporting by ITWire and CRN Asia. Such leadership appointments underline that suppliers are reorganising to support complex, locality-specific needs that include AI governance.

Other vendor and customer developments reported this week underline the fast pace of productisation around AI and cloud services. Thomson Reuters told investors in its full-year results that investment in agentic capabilities is accelerating its product innovation, and that it plans to scale its agentic tools further. Workday announced a Military Skills Mapper to translate veterans’ experience into civilian job skills, citing an autumn 2026 availability window. Claims of security maturity from any supplier should therefore be weighed against evidence such as independent certification, deployments in sensitive environments, and third-party audits; ANYbotics presented its recent ISO/IEC 27001 certification as precisely that kind of evidence in its announcement.

For risk owners, the practical takeaway is that agentic AI cannot be secured by legacy access lists alone. Effective controls will combine intent governance, explainability, provenance and human oversight, backed by third-party validation where possible. As organisations scale agent deployments, those safeguards will determine whether the promised business value arrives without excessive exposure.

Source: Noah Wire Services