Business Insider

Cybersecurity needs a rethink in the age of agentic artificial intelligence

Pinterest LinkedIn Tumblr
STOCK PHOTO | Image by Gerd Altmann from Pixabay

By Asha Hemrajani

ARTIFICIAL INTELLIGENCE (AI) has entered a new phase. It is shifting from passive tools to autonomous agents that can plan and act across digital and physical systems, often for extended periods and in concert with other agents. Their interacting and collaborating capabilities are scaling quickly, allowing them to perform increasingly complex tasks with minimal human input, across sectors such as banking, e-commerce, and logistics.

These systems are improving efficiency, but they also raise the stakes for cybersecurity as many of them were not built with security in mind.

Agentic AI systems can be attacked. As they interact with enterprise systems, other agents, and humans, the cybersecurity attack surface expands, exposing them to new threats such as impersonation attacks, prompt injections and data exfiltration.

The boundaries between appropriate autonomous use and deliberate misuse are blurring as enterprises permit AI agents to use apps on users’ behalf more frequently. Malicious agents can also take advantage of the same interfaces that authentic agents employ.

Safeguarding agentic AI in enterprise systems is therefore emerging as one of the defining upcoming cybersecurity challenges.

Recent state-linked campaigns such as UNC3886, reported in Singapore, revealed how adversaries try to exploit trusted enterprise platforms to gain persistent access. Similar risks will arise as agentic systems become more deeply integrated into operations. Protecting them is no longer optional; it is a strategic imperative.

CYBERSECURITY AS A STRATEGIC ENABLERTraditional cybersecurity frameworks were designed for systems with predictable behaviors. Agentic AI breaks that predictability. It learns, adapts, and operates with varying degrees of autonomy, creating new layers of uncertainty that static defenses cannot contain.

For governments and large enterprises operating critical infrastructure, this shift requires a fundamental change in mindset. As agentic AI becomes embedded in decision-making, operations, and citizen services, cybersecurity must evolve from a defensive function to a strategic enabler of trusted autonomy.

Purposeful and appropriate agentic AI deployment is critical. The right safeguards are needed for such deployments. Deeper testing of how AI systems interact, along with clear human oversight and escalation management is essential, especially in critical infrastructure.

Security must now be adaptive, context aware, and integrated into business and operational strategy. It is no longer just about preventing attacks. It is about maintaining the trustworthiness of autonomous systems that are starting to influence decisions at national and enterprise scale.

The distinction between securing AI deployment and leveraging AI in cybersecurity is also one that needs to be recognized. Guardrails for this nascent field are still in a formative phase, but ethical and practical implementation realities are important pieces of the puzzle that cannot be ignored.

Fundamental signposts in cybersecurity also need revisits and rethinks. Identity, data and attack surfaces take on different complexions that are still evolving, and there are contradictory philosophies in concepts such as Zero Trust that need adaptation to the growing impact of AI.

REFRAMING DIGITAL RISK GOVERNANCEGovernance frameworks must evolve alongside technology. Two issues are becoming urgent.

First, the spectrum of autonomy must be understood. Agentic behavior is not a binary state. Treating a basic automation script as equivalent to a self-directing system results in misplaced controls and uneven risk management. Oversight and safeguards should correspond to degrees of autonomy, not broad labels.

Second, accountability must be redefined. If an agentic AI system executes an action that is harmful, who should bear responsibility? Without clear boundaries, legal and ethical gaps will persist, and adversaries may exploit them. Boards, chief information security officers, and regulators need shared accountability models that reflect how agentic AI systems work.

These questions are already visible in data governance disputes, algorithmic bias cases, and AI incidents where AI systems have behaved in unexpected ways. Unless accountability frameworks get better defined, accountability gaps will widen.

SECURING AGENTIC AI IN  CRITICAL INFRASTRUCTUREAgentic AI deployment in critical infrastructure entities raises unique risks. Agentic AI promises gains in efficiency and resilience, but its vulnerabilities could cause cascading disruptions if compromised.

Protecting these systems requires new approaches to securing AI apps and agents.

It is essential that critical infrastructure entities retain control as they adopt more autonomous AI-driven systems.

Hence, the focus needs to be on detection and stopping attacks (such as direct and indirect prompt injection, data poisoning) on models/AI apps and agentic-AI workflows. Policy control for AI use such as blocking risky requests, data-leak prevention for AI apps, and detecting unsanctioned AI agents in use, among others, are also essential.

Equally important is ensuring resilience in agentic AI systems by governing the non-human identities (NHIs), the digital identities backbone of agentic AI. Enterprises will need to exercise proper oversight of NHIs in terms of access control, guardrails, and traceability.

CONVENING FOR RESILIENCE IN AGENTIC AINo single government, enterprise, or regulator can address these challenges on their own. For agentic AI systems to be safe and resilient, collaboration across borders and sectors is needed.

Across ASEAN, economies like Singapore, Malaysia, and the Philippines are building stronger partnerships between government, industry, and academia to prepare for the next wave of AI-driven threats. Platforms such as GovWare in Singapore play an important role in connecting regional voices and advancing dialogue on shared cybersecurity challenges that affect the entire ASEAN digital ecosystem.

The real value of such forums lies in bringing together policymakers, enterprises and innovators to address accountability, interoperability and resilience together.

BUILDING TRUST IN THE AGE OF AUTONOMYAs agentic AI becomes part of daily operations, the real challenge is not only technical but human. Trust will depend on the people who design, deploy, and oversee these systems, and on their ability to step in when things go wrong.

Events like GovWare help translate complex AI and cybersecurity issues into shared understanding and practical collaboration. They remind us that resilience is built through people working together, not machines acting alone.

Ultimately, technology is only as trustworthy as the intent and integrity of those who create and use it. A secure digital future will depend on our collective willingness to stay curious, accountable, and connected, because trust is built by people, not algorithms.

Asha Hemrajani is the  senior fellow at the S. Rajaratnam School of International Studies at Nanyang Technological University, and Ian Monteiro, CEO and founder of Image Engine.