Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg – Partner interview

The Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg is a leading research centre focused on trustworthy and secure information and communication technologies (ICT). Structured into 18 research groups, SnT covers domains such as cybersecurity, trustworthy software and systems, AI/ML, space systems, FinTech, and RegTech. The centre fosters interdisciplinary research and collaborates closely with industry through its Partnership Programme, which included over 70 industrial partners as of 2024. This strong academic-industry cooperation enables SnT to address real-world challenges and drive impactful innovation in ICT and AI. Among its research groups, the SerVal group plays a central role in advancing methods and tools for the design, development, and maintenance of trustworthy AI-based systems, contributing both to academic progress and to practical solutions for industrial stakeholders. Through this collaborative model, SnT is a key contributor to the digital transformation of society.

What is your organization’s role in the project? What unique contribution does it bring to the team?

SnT, and more specifically the SerVal research group, brings distinctive expertise in the development of technical AI sandboxes: dedicated environments that offer AI practitioners practical tools and ready-to-use solutions for assessing key dimensions of AI trustworthiness, including robustness, fairness, and explainability. SerVal has extensive experience in designing methodologies and mechanisms that prevent security threats from compromising AI systems, while also mitigating their impact and improving system generalization. The group's approach does not stop at robust-by-design development; it also lays the groundwork for continuous improvement. Since both AI systems and the threats they face evolve over time, SerVal emphasizes continuous monitoring and dynamic refinement of protection mechanisms to ensure long-term security and resilience. This forward-looking perspective enables the deployment of adaptive, trustworthy AI systems in real-world settings and supports the broader mission of building secure and reliable AI technologies.
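
As a purely illustrative sketch (not SerVal's actual tooling), the following Python snippet shows the kind of automated robustness probe such a sandbox might offer: a model's accuracy is re-measured as input perturbations grow, and a steep drop flags fragility. The model, data, and noise levels below are all hypothetical stand-ins.

```python
# Illustrative sketch only: a simple robustness probe of the kind an
# AI sandbox might automate. Model, data, and noise scales are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a real dataset.
X = rng.normal(size=(500, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w > 0).astype(int)

def predict(X, w):
    """Linear classifier standing in for an arbitrary trained model."""
    return (X @ w > 0).astype(int)

def accuracy_under_noise(X, y, w, sigma):
    """Accuracy when inputs are perturbed with Gaussian noise of scale sigma."""
    X_noisy = X + rng.normal(scale=sigma, size=X.shape)
    return float(np.mean(predict(X_noisy, w) == y))

# Sweep perturbation strength: a steep accuracy drop signals fragility.
for sigma in [0.0, 0.1, 0.5, 1.0]:
    print(f"sigma={sigma:.1f}  accuracy={accuracy_under_noise(X, y, true_w, sigma):.3f}")
```

A real sandbox would plug in actual models and domain-specific perturbations (adversarial examples, distribution shifts, data corruption), but the sweep-and-compare pattern is the same.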

How do you think CERTAIN will contribute to the Artificial Intelligence landscape in Europe?

CERTAIN is dedicated to developing a robust technical framework that facilitates the certification of AI systems by supporting AI developers, certification bodies, and AI laboratories in evaluating robustness and security. The framework will include standardized certification templates, streamlining evaluation processes and fostering collaboration across stakeholders. At its foundation, CERTAIN will deliver a flexible, evidence-based methodology for designing context-aware toolboxes: tools that account for sector-specific and system-specific threat landscapes. From a legal standpoint, the project ensures full compliance with the essential regulatory requirements concerning AI robustness, including those set out in the EU AI Act, the Cyber Resilience Act, and relevant data protection laws. Moreover, CERTAIN will proactively identify and address regulatory challenges that may hinder adoption of the framework, particularly those involving the use of personal and non-personal data during technical testing.
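
To make the idea of a standardized, context-aware certification template more concrete, here is a hypothetical sketch of what such a template might look like as machine-readable data. None of the field names, tests, or thresholds come from CERTAIN; they are assumptions for illustration only.

```python
# Purely illustrative: one possible shape for a machine-readable certification
# template. All field names and values are hypothetical, not CERTAIN's.
import json

template = {
    "system_id": "example-credit-scoring-model",      # hypothetical AI system
    "sector": "finance",                              # drives context-aware checks
    "threat_landscape": ["evasion", "data-poisoning"],
    "requirements": [
        {"property": "robustness", "test": "noise-perturbation-sweep",
         "threshold": "accuracy drop < 5% at sigma 0.1"},
        {"property": "security", "test": "adversarial-evasion-suite",
         "threshold": "attack success rate < 10%"},
    ],
    "regulatory_refs": ["EU AI Act", "Cyber Resilience Act"],
}

print(json.dumps(template, indent=2))
```

A structured format along these lines is what would let developers, laboratories, and certification bodies exchange evaluation criteria and evidence without ambiguity, though the actual template design is left to the project.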