The OWASP GenAI Security Project has recently released the OWASP Top 10 for Agentic Applications 2026, a globally peer-reviewed framework that highlights the most impactful security risks inherent in agentic AI applications and provides actionable guidance to help secure them.
The OWASP Agentic Top 10 acknowledges that agentic AI introduces fundamentally new risk vectors that challenge many long-standing assumptions about application behaviour, trust boundaries, and security controls. These new risks arise because agentic systems can persist memory and state, plan strategically and delegate tasks, autonomously invoke tools, operate across environments, and interact with external services independently.
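To make these properties concrete, the sketch below (a hypothetical, simplified agent loop in Python, not taken from the OWASP document) shows how persistent memory and autonomous tool invocation combine: untrusted content returned by one tool is written into memory and can then influence which high-impact tool the agent calls next.

```python
# Hypothetical, simplified agent loop; all names and logic are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class AgentState:
    """Memory persisted across steps; anything written here (including
    attacker-influenced tool output) feeds back into later decisions."""
    memory: List[str] = field(default_factory=list)


def search_web(query: str) -> str:
    # Placeholder for an external call; the returned content is untrusted.
    return f"<untrusted content retrieved for '{query}'>"


def send_email(to: str, body: str) -> str:
    # A high-impact action the agent can trigger without a human in the loop.
    return f"email sent to {to}"


TOOLS: Dict[str, Callable[..., str]] = {
    "search_web": search_web,
    "send_email": send_email,
}


def plan_next_action(goal: str, state: AgentState) -> Tuple[str, dict]:
    """Stand-in for the model's planning step. The chosen tool and its
    arguments depend on the goal *and* on memory, so instructions injected
    via earlier tool output can redirect the plan."""
    if not state.memory:
        return "search_web", {"query": goal}
    return "send_email", {"to": "user@example.com", "body": state.memory[-1]}


def run_agent(goal: str, max_steps: int = 2) -> AgentState:
    state = AgentState()
    for _ in range(max_steps):
        tool_name, args = plan_next_action(goal, state)
        result = TOOLS[tool_name](**args)  # autonomous tool invocation
        state.memory.append(result)        # state persists across steps
    return state


if __name__ == "__main__":
    print(run_agent("summarise today's security news").memory)
```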
The OWASP Agentic Top 10 organises the risks into clear categories that reflect recurring security weaknesses observed in agentic AI applications. These include, but are not limited to, manipulation of agent objectives, misuse and exploitation of tools and integrations, excessive or inappropriate privilege assignment, supply chain vulnerabilities, and weaknesses in how agent memory and state are managed.
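As an illustration of how such categories translate into controls (again a hypothetical Python sketch, not guidance quoted from the OWASP document), one common mitigation for tool misuse and excessive privilege is to expose each agent task to only an allow-listed, least-privilege subset of tools and to audit every invocation:

```python
# Hypothetical least-privilege tool wrapper; names and tools are illustrative only.
from typing import Callable, Dict, Set


class ToolPolicyError(Exception):
    """Raised when an agent attempts a call outside its granted privileges."""


def read_calendar(day: str) -> str:
    return f"<calendar entries for {day}>"


def delete_records(table: str) -> str:
    return f"deleted rows from {table}"


ALL_TOOLS: Dict[str, Callable[[str], str]] = {
    "read_calendar": read_calendar,
    "delete_records": delete_records,
}


class ScopedToolbox:
    """Exposes only an allow-listed subset of tools to a given agent task,
    so a manipulated objective cannot reach high-impact actions by default."""

    def __init__(self, tools: Dict[str, Callable[[str], str]], allowed: Set[str]):
        self._tools = tools
        self._allowed = allowed

    def call(self, name: str, arg: str) -> str:
        if name not in self._allowed:
            raise ToolPolicyError(f"tool '{name}' not permitted for this task")
        print(f"[audit] {name}({arg!r})")  # log every invocation for review
        return self._tools[name](arg)


if __name__ == "__main__":
    # A scheduling task is granted read-only access; destructive tools are absent.
    toolbox = ScopedToolbox(ALL_TOOLS, allowed={"read_calendar"})
    print(toolbox.call("read_calendar", "2026-01-20"))
    try:
        toolbox.call("delete_records", "customers")
    except ToolPolicyError as exc:
        print(f"blocked: {exc}")
```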
With the rapidly accelerating adoption and deployment of agentic AI, the timely release of the OWASP Agentic Top 10 gives decision-makers, developers, and other security-minded professionals a clearer view of the potential security implications. To fulfil their intended function, agentic AI applications are frequently integrated with cloud resources, granted connectivity to APIs, and entrusted with access to sensitive data. A compromised agent that has been manipulated into pursuing a malicious goal, that abuses its delegated privileges, or that otherwise behaves unpredictably outside the intended bounds of the service could therefore have a significant business impact.
By using the OWASP Agentic Top 10 to inform architectural design decisions and development practices, and to identify threat scenarios for ongoing security testing, organisations will be better equipped to manage and mitigate the unique risks posed by agentic AI applications.
For more information visit:
https://www.intertek.com/ai/red-teaming/





