Op-Ed: The Perils of AI-Driven Policing in Turkey

August 27, 2025
by Yasir Gökçe, published on 27 August 2025

In recent years, Turkey has embarked on a sweeping integration of artificial intelligence (AI) into law enforcement and justice systems. From automated traffic enforcement to predictive criminal analytics, these systems promise efficiency—but at what cost to civil liberties, transparency, and the rule of law?

During the 2025 Eid holiday, Turkey deployed Trafidar, an AI-powered modular radar system that autonomously monitors vehicle speeds, identifies license plates, and issues fines—all without human oversight. In Düzce alone, the system fined 224 drivers a total of 526,645 lira over five days. While touted as a road-safety measure, the rollout sparked widespread public backlash: drivers complained of sudden, opaque speed-limit changes and of being entrapped by an unfeeling system. The case illustrates a broader problem: when predictive or automated systems make decisions without human judgment, accountability evaporates.

Turkey’s Urban Safety Management System, known as KGYS, integrates video surveillance, license-plate recognition, and data links across municipalities. In 2025, the government vastly expanded its surveillance capacity by procuring thousands of facial-recognition cameras: more than 3,500 were deployed across 30 provinces, followed by tenders for 13,000 additional devices and billions of lira in spending. Police officers are reportedly to be issued body-worn cameras equipped with facial recognition, blanketing Istanbul’s streets in near-omnipresent surveillance. The scale and the absence of legal safeguards are deeply troubling. State authorities have reportedly already used facial recognition to identify protesters—sometimes solely from a photograph of individuals holding a banner—leading to arrests and detentions on scant evidence of wrongdoing.

The ASENA system—Analysis System Narcotics Network—was developed as an AI-powered assistant for Turkish counter-narcotics units. Since 2021, it has processed roughly 300 million queries, flagged around 9,020 risky situations, and contributed to uncovering approximately 3,795 criminal cases, including seizures of large quantities of illicit drugs and weapons. On the surface, ASENA is a tour de force of technological innovation and effectiveness. Yet the underlying issue remains: its operation depends on vast troves of sensitive personal data, and whether safeguards against misuse or erroneous profiling exist—or would hold up under pressure—remains unproven.

Even more alarming is the CBS Organizational Prediction Project, designed by Turkey’s Ministry of Justice and integrated into its judicial data system, UYAP. Ostensibly intended to improve administrative efficiency by identifying new case entries likely linked to known terrorist organizations, the system poses a profound threat to the presumption of innocence. By algorithmically 'tagging' cases or individuals as potentially linked to terrorism before any human deliberation, the tool risks irrevocable reputational damage and bias in legal proceedings. The Ministry claims it is merely for decision support. Yet with judges relying on it—without understanding how it works or what data it uses—the line between assistance and authoritative judgment becomes perilously thin.

This is not without precedent. The infamous ByLock cases after the 2016 coup attempt showed how digital tools could be wielded in arbitrary and politically motivated ways. Thousands were imprisoned simply for the alleged presence of the ByLock app on their phones, even though later investigations revealed that many had been falsely flagged, some involuntarily redirected to the app by service providers, and others accused without any credible evidence. ByLock became a technological alibi for criminalization: a mere data point standing in for proof of guilt. The parallels with AI-driven policing are striking. Just as ByLock functioned as a blunt instrument for mass persecution under the guise of technical certainty, AI systems such as CBS or facial recognition threaten to brand individuals as terrorists or criminals based on opaque, probabilistic correlations rather than concrete evidence.

The ByLock episode also revealed the malleability of Turkey’s judiciary, which accepted flawed digital evidence wholesale, sidelining legal standards in favor of political expedience. If courts could rubber-stamp detentions and convictions based on a single app, there is little reason to expect them to resist the 'scientific authority' of AI-generated predictions. Indeed, in a judicial climate already prone to arbitrariness, AI risks supercharging injustice, cloaking bias and state overreach in the credibility of algorithms.

Individually, each of these systems—Trafidar, KGYS, ASENA, CBS—can be defended as a technological advance. Together, they weave a disturbing tapestry of automation replacing judgment and suspicion replacing dignity. The erosion of legal safeguards, the opacity of systems that offer no avenue for redress, the amplification of bias and discrimination, and the concentration of surveillance power all point to a trajectory that entrenches authoritarian rule rather than strengthening justice.

It is also important to note that much of Turkey’s surveillance technology, particularly in AI and facial recognition, is purchased from China. This trajectory mirrors Beijing’s model of digital authoritarianism, where advanced AI tools are deployed not to protect citizens, but to monitor and control them. Turkey’s adoption of such technology indicates a deliberate move toward the Chinese path of governance, embedding technological repression into the fabric of daily life.

Turkey did not possess a functioning democratic fabric even before AI entered its policing and judiciary. What these technologies are doing now is not eroding democracy—it is tightening the grip of authoritarianism. As surveillance deepens and decision-making becomes automated, the space for dissent and justice shrinks further. Public safety cannot be purchased at the cost of liberty, nor can efficiency justify the codification of arbitrariness. If there is to be any hope of fairness in the future, a decisive recalibration is needed: one that puts human rights, judicial oversight, and transparency at the center of technological governance.

You may also like

ZERO DAY: Can a Cyberattack Start a Real War?

April 14, 2025
by Haşim Tekineş and Yasir Gökçe, published on 14 April 2025
In this video, we explore the chilling reality of cyberwarfare through the lens of Netflix’s new thriller Zero Day, starring Robert De Niro. We dive into how international law treats cyberattacks, whether they can trigger the right to self-defense, and what the rules are when the battlefield is no longer physical.

Transnational Repression: Weaponizing Financial Systems

September 25, 2024
by Dr. Yasir Gökçe, Haşim Tekineş, Mayra Russo and Sara Kezia Heinonen, published on 25 September 2024
In this episode, Dr. Yasir Gökçe, Mayra Russo, Sara Kezia Heinonen and Haşim Tekineş discussed our report, published in August, on Turkey’s weaponization of financial systems against its opponents. The report’s co-authors shared further details about this abuse.

How to Tackle Cyberattacks Against Space Infrastructure

April 3, 2024
by Yasir Gökçe, published on 3 April 2024
Dr. Yasir Gökçe and Brianna Bace discussed the legal and political dimensions of cyberattacks against space infrastructure and their recent academic article on the issue.