
EU Chat Control: If Privacy Is Outlawed, Only Outlaws Have Privacy

- Client-side scanning proposals shift private communication from targeted investigation to broad pre-emptive monitoring.
- The technical burden lands on everyone, while determined offenders can still adapt by changing channels or adding encryption layers.
- Once scanning infrastructure exists, expanding it to new policy categories becomes easier than reversing it.
We all want children to be safe. That instinct is right. The European Union's "Chat Control" proposal presents itself as a child-protection measure, but at its core it would require scanning everyone's private messages on their devices before they are sent. This is a form of pre-emptive monitoring. The stated aim is compassion; the architecture, however, is built on deep suspicion. The real question is not whether the goal is noble, but whether the tool is legitimate, effective, and safe at scale. When the state reaches into private correspondence pre-emptively, it inverts the default relationship between citizen and authority: everyone is treated as a suspect until proven otherwise. Such a radical shift demands extraordinary evidence, careful engineering, and robust safeguards, none of which are yet convincingly in view.
The Promise vs. The Implementation
The idea is straightforward: "detect abuse material and grooming early". The implementation, however, is anything but simple. Chat apps today use end-to-end encryption, which means that the sender's phone encrypts the message and only the recipient's phone can decrypt it. While the message is in transit or stored on servers, no one else can read it, not even the internet provider or the app developer. This is a powerful privacy guarantee that protects not only criminals but also journalists, activists, domestic-violence survivors, and ordinary people.
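The guarantee can be illustrated with a toy sketch. This is not a real protocol: actual messengers negotiate keys with Diffie-Hellman-style exchanges and use authenticated ciphers such as AES-GCM. Here, a pre-shared one-time-pad key stands in for all of that, purely to show that intermediaries see only ciphertext.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad XOR: a toy stand-in for real AEAD ciphers like AES-GCM.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = encrypt(key, message)          # what the server or ISP sees
assert ciphertext != message                # unreadable in transit
assert decrypt(key, ciphertext) == message  # only a key holder recovers it
```

The point of the sketch is the last two lines: without the key, the server relaying `ciphertext` learns nothing, which is exactly the property on-device scanning is designed to route around.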
If chat apps are to continue using end-to-end encryption, scanning must happen on the phone before encryption, i.e. just as the message is being sent. The proposal allows "detection orders" for known images (hash matching), "new/similar" images (perceptual matching), and grooming text. The model that decides what to flag will not be public. You will not see the rules. You will only encounter the system if a classifier is confident enough to escalate your case to the authorities.
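The two image-detection modes can be sketched in miniature. The hash list, pixel values and distance threshold below are illustrative assumptions, not the proposal's actual indicators: exact hashing flags only byte-identical files, while a perceptual hash tolerates small changes such as recompression.

```python
import hashlib

# Illustrative indicator list; real systems distribute opaque hash sets.
KNOWN_HASHES = {hashlib.sha256(b"known-image-bytes").hexdigest()}

def exact_match(image_bytes: bytes) -> bool:
    # Hash matching: flags only byte-identical copies of known images.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def average_hash(pixels: list[int]) -> int:
    # Toy perceptual hash: one bit per pixel, set if brighter than the mean.
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits; small distance means "similar" images.
    return bin(a ^ b).count("1")

original     = [10, 200, 30, 180, 40, 220, 50, 190]
recompressed = [12, 198, 31, 179, 42, 221, 49, 188]  # slightly altered copy

assert exact_match(b"known-image-bytes")
assert not exact_match(b"recompressed variant")      # exact hashing misses it
assert hamming(average_hash(original), average_hash(recompressed)) <= 2
```

The asymmetry is the policy-relevant part: exact hashing misses any altered copy, so "new/similar" detection requires fuzzy perceptual matching, and fuzziness is precisely where false positives enter.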
This is not targeted surveillance of suspects. It is continuous monitoring of everyone. That is the fundamental change. Once device-level scanning is normalised, the citizen-state relationship changes, and so does your attack surface.
Competence, Humility, Evidence
Most politicians are not experts in cryptography, cybersecurity or artificial intelligence. That is understandable; it is not their primary role. But when legislating in these domains, humility is essential. Assumptions must be tested, experts consulted, and their insights incorporated, even when politicians do not fully understand or agree with them.
Examining this proposed law reveals two uncomfortable facts. First, circumvention is relatively easy. If scanning happens before encryption, determined offenders can add a second encryption layer (such as PGP, or even something as simple as a Caesar cipher) or move to hardened, self-hosted infrastructure. The system will scan the easy targets, catch the careless, and miss those who are most intent on avoiding detection. This is a point noted by both regulators and the Internet Society. Second, the attack surface increases: mandated on-device classifiers, their update channels, indicator feeds and escalation paths become attractive targets for criminals and hostile states.
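A short sketch of how trivially a second layer defeats pre-encryption text scanning: even the Caesar cipher mentioned above suffices, because the on-device classifier only ever sees the shifted text.

```python
def caesar(text: str, shift: int) -> str:
    # Trivial extra layer applied before the messenger's own encryption.
    # A grooming-text classifier scanning the output sees only gibberish.
    return "".join(
        chr((ord(c) - 97 + shift) % 26 + 97) if c.islower() else c
        for c in text
    )

original = "meet at the usual place"
layered = caesar(original, 3)          # what the on-device scanner would see
assert layered == "phhw dw wkh xvxdo sodfh"
assert caesar(layered, -3) == original  # recipient removes the layer
```

Nothing about this is sophisticated, which is the point: the weakest pre-encryption layer already pushes the scanner's input outside the distribution its classifiers were trained on, while ordinary users who add no such layer remain fully scanned.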
Build It Once, Use It Everywhere
Surveillance infrastructure, once established, tends to expand in scope. Once device-level scanning is normalised and implemented, adding new categories, such as terrorism, extremism, "disinformation", tax or copyright enforcement, becomes a configuration change, not a new public debate. The technical capability is already present; only political will is required to broaden its use.
Many surveillance mechanisms introduced with narrow initial justification have expanded over time. RIPA in the UK, introduced for investigating serious crime, was later used by councils to monitor dog-fouling and school-catchment fraud. The question to ask about any surveillance infrastructure is not only "what will it be used for today?" but "what could it be used for in ten years, under a different government, with different political priorities?"
Real Offenders Won't Wait to Be Caught
Chat Control focuses on mainstream platforms like WhatsApp, Messenger, Signal and Telegram. Competent offenders are unlikely to remain on mainstream channels once on-device scanning is enabled; they will move to other platforms or add encryption layers. The result is that the lawful majority are scanned, while those intent on evading detection are likely to succeed.
It is like searching for criminals inside a police station or tax evaders inside a tax office: not the most effective way to catch high-value offenders. Signal's technical analysis also discussed the rebranding of client-side scanning as "upload moderation" to downplay the surveillance aspect. They noted that this does not solve the core problem: if scanning happens before encryption, the end-to-end guarantee is broken. End-to-end encryption exists to guarantee that only the intended recipients can read the messages, and introducing scanning on the sender's device compromises that guarantee. What is the point of locking the door if the thief has the key?
If privacy is outlawed, only outlaws will have privacy.
Philip Zimmermann, the creator of PGP, made this observation decades ago, and it has never been more true than today.
Internet Society: Client-Side Scanning: What It Is and Why It Threatens Trustworthy, Private Communication - an independent technical analysis concluding that circumvention is easy and that scanning primarily catches the careless while missing determined offenders.
Signal: New Branding, Same Scanning: "Upload Moderation" Undermines End-to-End Encryption - Signal's response to the rebranding of client-side scanning as "upload moderation", arguing the end-to-end guarantee is broken regardless of the terminology used.
AI Makes Mistakes. At EU Scale, Mistakes Become Policy
Even if we accept the premise of pre-emptive scanning, significant challenges remain. How do we guarantee the effectiveness of the underlying artificial intelligence and detection mechanisms? Even a tiny error rate becomes significant when applied across billions of messages and photos. Every false positive the system produces must be reviewed by an investigator, diverting resources from real cases. Each false report consumes investigator hours and can drag innocent families into processes they never deserved, while real victims may wait longer for help.
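The base-rate problem can be made concrete with rough, assumed figures; the volumes and error rates below are illustrative, not official statistics. Even a classifier that is wrong only 0.1% of the time drowns the rare genuine hits in false alarms.

```python
# Base-rate arithmetic with assumed (not official) figures.
messages_per_day = 10_000_000_000    # rough EU-scale daily message volume
fpr = 0.001                          # a "99.9% accurate" classifier
tpr = 0.9                            # detection rate on genuine material
prevalence = 1e-6                    # fraction of messages that are abusive

true_hits = prevalence * messages_per_day * tpr            # 9,000 per day
false_alarms = (1 - prevalence) * messages_per_day * fpr   # ~10,000,000 per day

precision = true_hits / (true_hits + false_alarms)
assert false_alarms > 1_000 * true_hits  # flags are overwhelmingly innocent
assert precision < 0.001                 # under 0.1% of flags are genuine
```

Under these assumptions, over a thousand innocent messages are flagged for every genuine one. The exact numbers are debatable, but the structure is not: when the thing being searched for is rare, precision collapses even with impressive-sounding accuracy.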
At the same time, context is lost when a model sees only pixels, not relationships: pool photos, bath-time pictures to grandparents, dermatology images for a paediatric consult, screenshots from parenting forums. The artificial intelligence cannot understand who the people in these images are or who you are sending them to. If flagged, you cannot provide context or an explanation; you are simply presumed guilty until proven innocent.
The Exemption Question: Professional Secrecy or Unequal Treatment?
Another key point in the debate is the question of exemptions. Even if we accept the premise of pre-emptive scanning and the challenges of AI accuracy, who gets exempted from this surveillance? The proposal currently includes exemptions for certain professions and roles, citing professional secrecy as justification. Ministers, commissioners, generals and party leaders may be excluded from scanning, but not doctors or journalists. The rationale is that some communications require confidentiality for the public good, but the argument is inconsistent. A public official, such as a politician, is employed by the public, so why hide their communications from public oversight?
This raises questions about equal treatment. If the technology is truly safe and necessary, why should any group be exempt at all? Professional secrecy is important, but so too is the privacy of ordinary citizens, including families, support workers, and vulnerable individuals. Exemptions risk creating a two-tier system, where privacy is protected for some but not for others. The public deserves a clear explanation of why certain groups are excluded, and whether this is genuinely about professional secrecy or simply about shielding those in power.
It is also worth noting that no group is immune to controversy. Danish politicians have had several scandals in recent times. A former Danish Minister of Foreign Affairs was implicated in a controversy involving sexual misconduct towards a 15-year-old participant at a Social Democratic Youth of Denmark camp. At the time of the incident, the politician was 34 years old. A former Minister of Industry, Business and Financial Affairs was reported to possess more than 6,200 photographs and more than 2,200 videos containing sexual abuse material involving minors, as well as a child-like sex doll. Lastly, a former member of parliament acknowledged that, when he was 28, he had a romantic relationship with a 15-year-old girl. He was expelled from his party for breaching its code of conduct, the police later dropped the grooming case, and he has since confirmed they are still together. The last example is the essence of grooming.
Why can this group of citizens invoke the right to privacy, while ordinary citizens cannot?
Security Is a System, Not a Slogan
End-to-end encryption supports the security of banking, healthcare, domestic-violence shelters, journalism, elections and the private lives of ordinary people. Mandated server backdoors and client-side scanners both weaken this chain. It is not possible to have "strong" end-to-end encryption and device-level surveillance at the same time.
In 2021, Apple announced plans to scan iCloud Photos on-device for CSAM before upload. After significant pushback from security researchers, privacy advocates, and civil liberties organisations, Apple abandoned the plan, opting instead for narrower, opt-in child safety features.
Apple had full control over its hardware, operating system, and infrastructure and still could not make client-side scanning trustworthy enough to ship. The EUâs proposal would apply the same approach across hundreds of device manufacturers and operating systems.
What Actually Moves Outcomes
If the goal is fewer victims and more convictions, efforts should focus where harm and profit concentrate, rather than treating the entire population as suspects. Target distribution networks and repeat offenders, freeze hosting infrastructure and trace payments, improve reporting pipelines so investigators receive fewer, higher-quality tips with context, and run targeted, warrant-backed operations that stand up in court.
The Questions EU Leaders Must Answer Clearly, in Public
- Legitimacy first: what is the legal precedent that justifies pre-crime scanning of private correspondence?
- Effectiveness next: what measured increase in arrests and convictions do you expect from endpoint scanning versus targeted warrants and infrastructure take-downs, and where are the peer-reviewed evaluations?
- Then safety: what are the audited false-positive and false-negative rates per modality at EU scale, and how will you stop automated tips overwhelming frontline units?
- Finally, security: what is the threat model for the mandated scanner itself, including update channels and indicator feeds, and how will you prevent hostile reuse of those hooks?

If you cannot answer these cleanly, perhaps the mechanism is not ready.
Europeâs Choice
Europe can both defend private correspondence and protect children with precise warrants, resourced investigators and pressure where abuse actually scales. Or it can normalise pre-emptive device scanning, risk overwhelming police with false positives, and widen everyoneâs attack surface. If the system is truly safe and accurate, apply it without exemptions. If that cannot be done, the public deserves to know why. The best policy is one that delivers both safety and dignity.
If you want to learn more, I suggest visiting Fight Chat Control.
The views expressed in this article are the author's own and do not represent the positions of any employer, institution, or organisation the author is affiliated with. All information provided in this article is based solely on public knowledge.
Niclas Hedam
PhD, Computer Science
Niclas Hedam holds a PhD in Computer Science from the IT University of Copenhagen. He is passionate about educating others on the importance of safeguarding personal information online.

