
ETSI Establishes a Baseline Cybersecurity Standard for AI

The development and deployment of AI present cybersecurity challenges that conventional security frameworks alone do not fully address. With the widespread adoption of AI across industries, robust cybersecurity measures specific to AI have become essential. The ETSI EN 304 223 standard responds to this need by defining baseline cybersecurity requirements for AI implementations, laying out high-level principles and provisions applicable throughout the AI lifecycle.

By structuring these requirements into five distinct phases, namely secure design, secure development, secure deployment, secure maintenance and secure end-of-life, the standard aims to ensure that security is not an afterthought but an integral aspect of AI systems from conception to decommissioning.

In the secure design phase, the standard calls for explicit security training, threat modelling and risk management tailored to AI characteristics, while secure development incorporates guidance for securing data, models and supply chains, with effectiveness evaluated through independent security testing. Secure deployment and maintenance emphasise communication practices, operational security controls and continuous logging and monitoring. Finally, the end-of-life phase helps ensure AI systems and related data are decommissioned responsibly.

A notable requirement within the standard (Provision 5.2.5-2.1) is the expectation that large language models, applications and associated systems undergo security testing by independent testers with skills relevant to the AI technology being utilised. This requirement reflects the reality that large language models introduce distinct attack surfaces beyond those covered by conventional penetration testing, and recognises that effective AI security testing demands domain-specific expertise, with independent, knowledgeable security testers being well positioned to simulate real-world adversarial interactions.

As AI becomes ever more central to business strategy, adopting recognised baseline standards will help organisations demonstrate security due diligence, strengthen internal assurance processes, support the procurement of secure third-party AI solutions and improve incident readiness.

For more information visit:

https://www.intertek.com/ai/red-teaming/

"Special focus on the cybersecurity of Artificial Intelligence (AI) is important due to its distinct differences compared to traditional software" - ETSI


Tags

cyber threats, nta, ai, cyber security, cybersecurity, technology, due diligence, standards, ai red teaming, penetration testing