Luca Nannini

Proposal(s) title:
  • Technical Contributions to WG2 & WG4's Draft Standards through Annex ZA and hEN Checklists
Proposal(s) topic:

Artificial Intelligence

Impact on SMEs (7th Open Call):
I believe this work helps reduce compliance uncertainty and costs for SMEs. Technical coherence across the standards framework simplifies implementation for organizations with limited resources. My contributions to the QMS standard focus in particular on ensuring that requirements are scalable and accessible to SMEs developing AI systems, for example by showing SMEs how the standards interrelate and thereby easing the burden of understanding how requirements flow across the different standards.

Impact on society (7th Open Call):
My work on the AI Trustworthiness Framework, particularly enhancing the requirements for transparency and human oversight, helps ensure that the standards effectively support the protection of fundamental rights as required by the AI Act. This strengthens societal safeguards against potential harms from AI systems.


Value of Research

My fellowship addresses three critical gaps in the European AI standardisation landscape.

The first gap concerns harmonisation documentation. Technical documentation (Annex ZA, HAS checklists) is urgently needed to connect the standards under development with the AI Act requirements following the M/593 standardisation request. Without this work, standards risk delayed citation in the OJEU, creating regulatory uncertainty. I have developed preliminary harmonisation documents for JT021008 (Trustworthiness), JT021039 (QMS), and JT021024 (Risk Management).

The second gap concerns cross-standard technical coherence. Because multiple AI standards are being developed simultaneously, there is potential for inconsistencies in terminology, requirements, and implementation approaches. I have created mapping documents that highlight the interconnections between standards, focusing in particular on how QMS requirements interface with the other M/593 standards, to ensure a coherent framework.

The third gap concerns alignment with the EU AI Act. The technical specifications in draft standards must align precisely with the AI Act's articles to support regulatory compliance. I have contributed targeted technical refinements to clauses 6.4 (transparency) and 6.5 (human oversight) of the Trustworthiness Framework to strengthen alignment with Articles 13 and 14 of the AI Act.

Full Name: Luca Nannini
Title & Organisation Name: Piccadilly Labs
Country: Spain
LinkedIn:
Standards Development Organisation: