The BOSCO Ruling: A Historical Precedent for Algorithmic Transparency
A citizen victory that forces the State to open its algorithms
In September 2025, the Spanish Supreme Court issued a ruling that can be considered foundational in the history of public artificial intelligence: the State is obliged to reveal the source code of the BOSCO algorithm, used to decide who accesses the social electricity bonus. Behind this decision is not only a legal case won by the Civio Foundation, but a key precedent in terms of algorithmic transparency in public administration.
The ruling establishes that algorithmic transparency is a constitutional right, and that when an automated system has a direct impact on people, its operation must be auditable. Appeals to “intellectual property” or alleged “national security” risks no longer suffice. What was once opacity must now be openness.
Algorithms that decide, rights that erode
For years, the Spanish Government maintained that BOSCO was a simple calculator. However, the Spanish Data Protection Agency and later the Supreme Court confirmed the opposite: BOSCO made automatic decisions with legal effects, without human review. Thousands of legitimate aid applications were denied by an unexplained system. And that has a name: a violation of rights.
The ruling not only forces the delivery of the source code but also sets key principles: that transparency is a security measure, that public software cannot be shielded by copyright, and that judicial control over automated decisions is only possible if the algorithm is auditable. In other words: without algorithmic transparency, there is no justice.
And now what? A new model for public AI
Spain thus becomes one of the first countries in the world to judicially enshrine the obligation of transparency over public algorithms. The decision aligns national legislation with the new European Artificial Intelligence Regulation and reinforces initiatives such as algorithm registries in Barcelona and Valencia.
This ruling opens the door for other opaque systems used in the judicial field, such as LEXNET or MINERVA, to also be held accountable. Moreover, the Administration is now morally and legally compelled to design its systems with open-source criteria, external review, and citizen participation from the outset.
Europe is moving: The Netherlands and France set the example
The BOSCO case is not alone. In the Netherlands, following the “Toeslagenaffaire” scandal—an algorithmic system that wrongly identified thousands of families as fraudulent—the Dutch Government created a Public Register of Algorithms, detailing the systems used by the administration, their purpose, and the level of human intervention.
France, for its part, has incorporated algorithmic transparency requirements into its administrative code since 2016, requiring that any automated administrative decision be documented and justified. This measure has already enabled citizen audits of algorithms used in education and employment.
Towards a public AI based on rights
What these countries—and now Spain—are doing is building the foundations for an institutional artificial intelligence that is not only efficient but also ethical, explainable, and legitimate. It’s not about slowing down innovation, but about ensuring that it occurs within a framework of guarantees.
The creation of the European Centre for Algorithmic Transparency (ECAT), based in Seville, reinforces this approach at a continental level. Its objective: to ensure that the algorithms used in Europe comply with principles of traceability, human supervision, and non-discrimination.
The message is clear: public AI cannot operate as a black box. We need institutionalized “human-in-the-loop,” open registries, and common standards that allow trust in the systems that manage our rights.
It’s not just code: it’s power
The battle for access to the BOSCO code is not a technicality. It is a struggle for democratic control in the era of algorithms. The technological “black box” can no longer be a refuge for unjustified public decisions. The precedent is clear: if an algorithm decides on your rights, you must have the right to understand it.
The ruling on BOSCO does not occur in a vacuum. It coincides with the approval of the AI Act, the first European regulation that governs the use of artificial intelligence and considers systems applied in public administration to decide on social rights as “high risk.” This law requires transparency, explainability, and human supervision, but the Spanish Supreme Court has gone further: it has transformed those principles into an enforceable right.
The ruling is not merely a technical question of access to code; it is a matter of democratic control in the era of algorithms. Spain not only complies with the AI Act but places itself at the European forefront, sending a clear message: from Europe, we must travel a steady but sure path of necessary regulation towards algorithmic transparency, and towards a public, fair, and democratic artificial intelligence.