For a long time, software behaved like a physical product. A company released it, customers used it, and when problems appeared, someone fixed it. Responsibility was usually clear, and the technology stayed more or less the same between updates.
That world no longer exists. Today’s digital products (“products with digital elements”) function less like standalone items and more like complex ecosystems. A single smartphone app or smart device may rely on open-source software written by volunteers across the globe, cloud services run by foreign companies, automatic updates, and even artificial intelligence models that change over time. Much of this complexity remains invisible to users, yet it directly affects privacy, safety, and even national security.
A vulnerable medical device can endanger patients. A compromised cloud service can disrupt businesses. A hacked update system can turn millions of devices into tools for cyberattacks.
The new Cyber Resilience Act (CRA) is Europe's response to this reality. It requires that products with digital elements, from connected toys to highly critical industrial hardware and software, are designed with security in mind and supported with updates and vulnerability handling throughout their lifetime.
Under the CRA, companies must do more than say their products are secure. They must prove it. This includes showing how vulnerabilities are handled, how software is updated, and what components are used inside the product (SBOM – Software Bill of Materials).
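An SBOM is, in essence, a machine-readable inventory of everything inside a product. A minimal sketch of the idea, with invented component entries loosely following the layout of standard SBOM formats such as CycloneDX:

```python
import json

# A minimal, illustrative SBOM: an inventory of the software components inside
# a product, with name, version, and supplier for each. All entries below are
# invented; real SBOMs follow standard formats such as CycloneDX or SPDX.
sbom_json = """
{
  "product": "example-smart-thermostat",
  "components": [
    {"name": "openssl", "version": "3.0.13", "supplier": "OpenSSL Project"},
    {"name": "zlib",    "version": "1.3.1",  "supplier": "zlib authors"},
    {"name": "libcurl", "version": "8.7.1",  "supplier": "curl project"}
  ]
}
"""

sbom = json.loads(sbom_json)

# With such an inventory, "what is inside this product?" can be answered
# mechanically rather than by reading prose documentation.
for comp in sbom["components"]:
    print(f'{comp["name"]} {comp["version"]} ({comp["supplier"]})')
```

The value for compliance is exactly this mechanical answerability: when a flaw is disclosed in, say, one library version, affected products can be identified by query rather than by manual review.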
Why compliance is no longer just paperwork
The difficulty is that modern digital products are not static. They are built from many parts that keep changing. A software update, a new library, or a newly discovered flaw in a third-party component can instantly change the security of the whole system. As a result, traditional approaches (one-off audits, static certificates, or checklist-based compliance) are no longer sufficient. What matters is not just how a product looked on the day it was launched, but how secure it remains over time.
The question becomes, then: how can modern societies remain secure, safe, and fair when the digital systems they rely on are continuously changing?
To address this challenge, many companies are turning to artificial intelligence (AI). AI can scan documents, track vulnerabilities, and analyze huge amounts of data far faster than any human team.
These systems are good at dealing with messy, real-world information like product manuals, security reports, and vulnerability databases.
But there is an important factor to keep in mind. The CRA is not just a technical rulebook; it is a legal framework. This means that when a product is allowed on the market or blocked from it, regulators and companies must be able to explain the decision. Unfortunately, AI models often act as “black boxes”: they may produce the desired outcome, but, given the huge number of operations performed and the speed of the process, not even the researchers who created them fully understand how the network arrived at that specific decision.
That is why Europe is paying growing attention to explainable and auditable AI. These are systems that do not just give answers but also show the reasoning behind them in a way that humans can understand, challenge, and verify. This matters not only for regulators, but for citizens. If digital decisions affect safety, rights, and access to markets, they must be open to scrutiny.
How intelligence and law can work together
A new approach is coming from EU-funded research initiatives that are developing practical methods for managing cybersecurity compliance over time, combining traditional AI systems that look for patterns in data with formal models of law and regulation. These models describe, in precise terms, what the CRA requires. They define what counts as a secure product, what obligations manufacturers have, and what conditions must be met before a product can be trusted.
AI can extract facts from real documents — for example, whether a product has a vulnerability policy or how long it is supported. These facts are then fed into a structured legal model that applies the rules of the CRA. The outcome is not just a binary decision, but a traceable explanation. If a product fails, the system can say: “This product does not meet requirement X because it has a known vulnerability and no patching process.” This is what is known as accountable automation.
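A sketch of what such accountable automation might look like in miniature. The requirement names, thresholds, and facts below are invented for illustration; they are not the actual CRA provisions. The point is the shape of the output: a verdict plus the explicit reasons behind it.

```python
# Hypothetical sketch: explicit compliance rules applied to facts that were
# extracted from product documentation. Requirement names and thresholds are
# invented for illustration, not taken from the CRA text.

def check_compliance(facts):
    """Return (compliant, reasons): a verdict plus a traceable explanation."""
    reasons = []

    if not facts.get("has_vulnerability_policy"):
        reasons.append("Requirement R1 failed: no vulnerability handling policy.")
    if facts.get("support_period_years", 0) < 5:
        reasons.append("Requirement R2 failed: support period shorter than 5 years.")
    if facts.get("known_unpatched_vulnerabilities", 0) > 0:
        reasons.append("Requirement R3 failed: known vulnerability with no patching process.")

    return (len(reasons) == 0, reasons)

# Facts about a (fictional) product, as an extraction step might produce them.
product = {
    "has_vulnerability_policy": True,
    "support_period_years": 3,
    "known_unpatched_vulnerabilities": 1,
}

ok, why = check_compliance(product)
print("Compliant" if ok else "Not compliant")
for reason in why:
    print(" -", reason)
```

Because every failure is tied to a named rule, the decision can be challenged point by point — the property a “black box” classifier cannot offer.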
The CURIUM project focuses specifically on operationalising the Cyber Resilience Act across a product’s lifecycle.
In practice, CURIUM develops methods and tools that link legal CRA requirements to concrete technical evidence, such as vulnerability handling procedures, update policies, software component inventories (SBOMs), technical documentation, post-market analysis, and penetration testing reports. This allows compliance to be assessed and re‑assessed as products evolve, rather than treated as a one‑time certification event.
In parallel, the CUSTODES project addresses a related challenge: how to certify modern digital products composed of many interconnected components. CUSTODES develops certification approaches that explicitly account for dependencies between components, such as software libraries, and external digital building blocks. Instead of treating a product as a single static entity, these approaches focus on assessing how security properties and assurance evidence propagate across component boundaries, helping cybersecurity certificates remain meaningful even as underlying components change.
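One way to picture assurance propagating across component boundaries is as a toy “weakest link” rule over a dependency graph. The component names and the rule itself are invented for illustration and are not the CUSTODES certification logic, which is far more nuanced:

```python
# Toy sketch of assurance propagation across a dependency graph. Component
# names and the simple "weakest link" rule are invented for illustration.

dependencies = {
    "firmware":   ["crypto-lib", "net-stack"],
    "net-stack":  ["crypto-lib"],
    "crypto-lib": [],
}

# Each component's own assurance status (e.g. a valid certificate).
assured = {"firmware": True, "net-stack": True, "crypto-lib": True}

def effective_assurance(component):
    """A component is only as assured as itself and everything it depends on."""
    if not assured[component]:
        return False
    return all(effective_assurance(dep) for dep in dependencies[component])

# A new flaw is found in a low-level library...
assured["crypto-lib"] = False

# ...and the effective assurance of everything built on top of it changes too,
# even though those components themselves did not change.
print(effective_assurance("firmware"))  # → False
```

This is why a certificate that ignores dependencies can silently go stale: the certified product is unchanged, but something underneath it is not.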
Joint work carried out within the CURIUM and CUSTODES projects has resulted in concrete research outcomes. Two companion research papers introduce OntoCRA‑NS, a practical framework that shows how explainable and auditable CRA compliance can be implemented in real systems.
The core idea is simple: neural language‑processing models analyze unstructured sources such as product documentation, SBOMs, vulnerability reports, pen-test reports, and support policies to extract CRA‑relevant facts. These facts are then passed to a rule‑based system that applies formal CRA requirements. All compliance decisions are made in a symbolic reasoning layer built on a formal ontology (OntoCRA) that encodes CRA essential and vulnerability requirements as explicit rules. Based on this logic, the system can determine whether a product qualifies for a Declaration of Conformity or should be restricted from the market. Each decision is accompanied by a clear explanation that links the outcome to specific regulatory requirements and concrete evidence. A proof‑of‑concept implementation, developed using representative products from the CURIUM project, has shown how this approach can support continuous compliance.
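The two-stage split described above can be sketched as follows. This is a deliberately simplified mock: a plain pattern-matching function stands in for the neural extraction layer, and the two rules are invented, not the OntoCRA encoding of the CRA:

```python
import re

# Stage 1 (stand-in for the neural layer): extract structured facts from
# unstructured text. A trivial pattern-matcher simulates what a language
# model would do on real product documentation; the input text is invented.
def extract_facts(document: str) -> dict:
    support = re.search(r"supported for (\d+) years", document)
    return {
        "has_update_mechanism": "automatic updates" in document,
        "support_years": int(support.group(1)) if support else 0,
    }

# Stage 2 (stand-in for the symbolic layer): explicit rules over the facts.
# These two rules are illustrative only.
RULES = [
    ("secure update mechanism", lambda f: f["has_update_mechanism"]),
    ("defined support period",  lambda f: f["support_years"] > 0),
]

def decide(document: str):
    facts = extract_facts(document)
    failed = [name for name, rule in RULES if not rule(facts)]
    verdict = "Declaration of Conformity" if not failed else "restricted"
    return verdict, failed

verdict, failed = decide(
    "The device receives automatic updates and is supported for 5 years."
)
print(verdict, failed)
```

The design point is the clean interface between the stages: the neural side may be opaque, but everything it produces is forced through named, inspectable rules, so the final decision remains auditable.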
A new kind of digital safety
Europe is shaping a new approach to digital governance. It is one that says security, transparency, and accountability must go together. The CRA is not about slowing innovation. It is about making sure that innovation does not come at the cost of safety, privacy, and public trust.
As digital products become more complex and more powerful, the tools we use to regulate them must also evolve. Explainable and auditable AI offers a way to manage complexity without giving up control.
About Prof. dr. sc. Miroslav Baca
Prof. dr. sc. Miroslav Baca is a researcher and full professor at University North in the field of information security and digital forensics, with long-standing academic and practical experience in cybercrime investigation and cyber defense. His work focuses on digital evidence, forensic methodologies, and the development of security frameworks that support law enforcement and judicial processes. He has led and contributed to numerous international research and education initiatives, helping to shape modern approaches to cyber investigation and training. Through teaching, publishing, and expert engagement, he promotes rigorous, practice-oriented cybersecurity and forensic science. His mission is to strengthen trust and effectiveness in digital investigations and security operations. His company, Cyber-security Ltd, coordinates the CURIUM project, where he is fully engaged in the topic of the CRA and AI.
About Dr. Jasmin Cosic
Dr. Jasmin Cosic is Head of Cyber R&D and Standardization at DEKRA SE, working at the intersection of cybersecurity research, certification, explainable AI (XAI), and digital trust. His focus is on transforming advanced cybersecurity concepts into practical, testable, and standardized assurance frameworks for complex and connected systems. With a Ph.D. in Information Science and a background in digital forensics and critical infrastructure protection, Jasmin bridges scientific rigor with operational impact. He contributes to many international research initiatives (currently the CUSTODES project) and standardization efforts (as an ENISA external expert) that advance explainable, scalable, and trustworthy cyber certification. His work is driven by the goal of making security measurable, auditable, and ready for the future.