CERTAIN Maps the EU AI Rulebook: A New Foundation for Trustworthy AI in Europe
How do you build AI systems that are innovative and legally sound in Europe? With the EU AI Act now in force, this question is more urgent than ever. To help answer it, the CERTAIN Project has released a key milestone: Deliverable D3.1 – a comprehensive legal analysis and regulatory map of AI in the EU.
Rather than focusing on a single law, this work takes a big-picture view of Europe’s AI governance landscape. The result is a clear, structured overview of how the EU AI Act fits together with data protection, cybersecurity, digital services, and copyright rules – and what all this means in practice for AI developers, deployers, and data holders.
From abstract law to practical reality
One of the biggest challenges with AI regulation is complexity. The EU AI Act introduces a risk-based approach, but real-world AI systems rarely exist in isolation. They depend on data, generate content, and operate within technical infrastructures that are regulated by multiple overlapping laws.
Deliverable D3.1 tackles this head-on by organising EU AI regulation into four interconnected domains: AI systems, data, content, and technical infrastructure. This makes it easier to see how legal obligations intersect across the AI lifecycle – from training data and transparency obligations to cybersecurity and product safety.
Who is responsible for what?
The report also brings clarity to the different roles in the AI value chain. Providers, deployers, importers, distributors, product manufacturers, and authorised representatives all face different responsibilities under the AI Act. CERTAIN’s legal mapping shows how these obligations change depending on an AI system’s risk level, helping organisations understand where accountability lies and how roles can overlap in practice.
Navigating uncertainty in AI regulation
While the AI Act is now law, many of its implementation tools, such as harmonised standards, codes of practice, and certification schemes, are still under development. This creates uncertainty for organisations that want to comply early.
At the same time, initiatives such as the Digital Omnibus highlight that implementation timelines may shift. CERTAIN will remain closely attuned to these developments, while maintaining a clear message: organisations should continue preparing for compliance, as the core principles and obligations of the AI Act are here to stay.
Deliverable D3.1 does not shy away from these gaps and challenges. Instead, it highlights where legal clarity already exists, where uncertainty remains, and how emerging compliance mechanisms can help bridge the transition period. This forward-looking perspective is essential for building AI systems that are not only compliant today but also future-proof.
A cornerstone for CERTAIN’s next steps
This legal analysis is the foundation for CERTAIN’s technical and ethical work. By translating complex regulation into a clear compliance baseline, it enables the project to develop practical tools, certification pathways, and guidelines that support ethical, auditable, and regulation-ready AI across Europe.
As the EU AI Act reshapes the global AI landscape, CERTAIN is positioning itself at the intersection of law, technology, and trust – helping turn regulation from a barrier into a catalyst for responsible AI innovation.
This article was written by Olena Denysenko, Analyst at University of Tartu.