AURA: AI to anticipate digital risk in minors
In a hyper-connected world, where AI transforms entire industries, one project stands out for its silent but disruptive approach: AURA (Adaptive User Risk Analyzer), a tool that puts artificial intelligence at the service of online child protection.
The system will be presented at the 1st Congress of the Ibero-American Network of Associations of Legal and Forensic Psychology and the 7th World Congress of Criminology, to be held on November 22, 23, and 24, 2025, in Valencia. Under the slogan “Cybercrime and other recent challenges in criminology and legal psychology,” the event will focus on new digital threats and innovative prevention models, such as the one proposed by AURA.
Cristian Luna, AI Solutions Architect at Evidenze, leads the development of this system. In a brief interview, he shares the technical, ethical, and strategic vision that drove the creation of AURA.
AI to protect before harm occurs
AURA emerged as a response to an alarming reality: digital grooming can go on for four to six months before being detected, and more than 70% of minor victims never report it. This combination of institutional slowness and victim silence created a protection gap that AURA sought to fill.
“We became obsessed with designing an architecture that could identify risk signals without storing conversations or directly analyzing personal information. It was key that AURA was not perceived as a surveillance tool, but as an ethical shield,” Luna explains. This principle, known as privacy by design, is one of the pillars of the technology.
Its core engine, the Grooming Detector v1.0 model, is based on a Transformer architecture and trained through supervised fine-tuning. It detects risk patterns in real time, allowing intervention before psychological or emotional harm is done. The technology adapts to context without making binary judgments, working with risk gradients and prioritizing assisted human intervention.
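As a rough illustration of what "risk gradients with assisted human intervention" might look like in practice, the sketch below maps a classifier's probability to graded tiers and routes anything above a threshold to a human professional. The class name, tiers, and thresholds are invented for this example; they are not taken from AURA.

```python
# Illustrative sketch only: the tiers and thresholds are assumptions,
# not details of the actual Grooming Detector model.
from dataclasses import dataclass


@dataclass
class RiskAssessment:
    score: float            # continuous risk probability in [0, 1]
    tier: str               # graded label instead of a binary verdict
    needs_human_review: bool


def assess(score: float) -> RiskAssessment:
    """Convert a classifier's probability into a graded risk tier."""
    if score < 0.3:
        tier = "low"
    elif score < 0.7:
        tier = "elevated"
    else:
        tier = "high"
    # Elevated and high tiers are escalated to a human professional
    # rather than triggering an automatic block or report.
    return RiskAssessment(score, tier, needs_human_review=score >= 0.3)


print(assess(0.85))   # a high score is flagged for assisted review
print(assess(0.10))   # a low score is logged, not escalated
```

The point of the gradient is that the system never issues a yes/no verdict on its own: the model narrows attention, and a person decides.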
Cybersecurity beyond the firewall
AURA introduced a new category in cybersecurity: contextual detection of human threats. It’s no longer just about blocking technical attacks, but about anticipating invisible psychological manipulations through adaptive, sensitive models aligned with ethical principles.
“We are at the beginning of a new stage, where the focus will not only be on blocking external threats but on understanding internal human dynamics. This change in mentality—from reaction to anticipation—will be key in the near future,” Luna states. This implies that cybersecurity must evolve towards a human-centric approach, where AI does not replace but complements emotional intelligence and professional judgment.
Towards a new standard of digital prevention
The potential of AURA goes beyond a single tool. Its creator's five-year vision is for this early-detection approach to be integrated into educational protocols, technology regulations, and the ethical standards of platforms serving sensitive audiences.
“We want AURA to become a new standard of digital protection. But also a conversation that leads us to rethink the role of AI in the defense of fundamental rights.”
The scalability of the model also opens the door to other applications beyond childhood: detection of workplace harassment, manipulative speech in educational environments, or risk signals in vulnerable communities. The logic is the same: intervene before the context deteriorates.
AI with real impact
Projects like AURA open an essential path: one of AI with real impact, oriented towards the common good and built from the intersection of technology, ethics, and human well-being.
Solutions like these will take center stage at the Gen AI Summit Europe. Last year, Cristian, the creator of AURA and one of the AI consultants who collaborates with our training programs, gave a well-received talk there, "Designing AI architectures oriented to compliance." In 2026, we will have the chance to enrich the discussion, addressing everything from regulation to critical applications of artificial intelligence in cybersecurity, health, education, and governance.