On February 2, 2025, the EU Artificial Intelligence Act's first prohibitions took effect. Social scoring, subliminal manipulation, untargeted facial scraping, and real-time biometric surveillance in public spaces became illegal across the European Union. Violations carry fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

On August 2, 2026, the next major phase arrives for many AI Act obligations, including transparency duties and parts of the high-risk framework. But the high-risk timeline is no longer a single date. As of May 2026, the European Commission and Council have moved to adjust some application dates because harmonized standards and support tools are not fully ready.

The EU AI Act is the most comprehensive AI regulation ever enacted. It is also a document where the distance between what sounds prohibited and what actually is prohibited requires careful reading.

## What is Already Banned

Since February 2, 2025, the following AI practices are prohibited in the EU:

**Social scoring by public authorities.** AI systems that evaluate or classify people based on social behavior or personal characteristics in ways that lead to detrimental or disproportionate treatment are banned. This applies to systems operated by or on behalf of governments.

**Untargeted facial image scraping.** Building facial recognition databases by indiscriminately scraping facial images from the internet or CCTV footage is prohibited. This targets the business model of companies like Clearview AI, which built a database of over 30 billion images scraped from social media without consent.

**Real-time remote biometric identification in public spaces.**
Using AI to identify people in real time through cameras in publicly accessible spaces is banned for law enforcement, with three narrow exceptions:

- Searching for victims of kidnapping, trafficking, or sexual exploitation
- Preventing an imminent threat to the life or physical safety of a person
- Locating or identifying a person suspected of a criminal offense carrying a custodial sentence of at least four years

Even within these exceptions, the system requires prior judicial authorization, a necessity and proportionality assessment, and reporting obligations to the national supervisory authority.

**Emotion recognition in workplaces and educational institutions.** AI that infers emotions from biometric data is banned in employment and education settings, except for medical or safety purposes.

**Subliminal manipulation and exploitation of vulnerabilities.** AI that manipulates behavior through subliminal techniques or exploits vulnerabilities related to age, disability, or socioeconomic situation is prohibited.

**Predictive policing based solely on profiling.** AI systems that assess the risk of a person committing a criminal offense based solely on profiling or personality traits are banned.

## What High-Risk Means in the 2026 Timeline

When high-risk obligations apply, AI systems in the following categories face new requirements. Some obligations still attach to August 2026, while some product-embedded high-risk systems now have later transition dates under the EU's simplification package:

**Biometric identification systems.** Both real-time (already restricted) and post-processing facial recognition are classified as high-risk. This means retrospective facial recognition — identifying someone from stored footage after an event — remains legal. It is subject to risk management, data governance, transparency, and human oversight requirements, but it is not banned.

**Law enforcement AI.**
Systems used for polygraphs and emotion detection, evaluating the reliability of evidence, assessing the risk of reoffending, and predicting criminal offenses are all high-risk.

**Employment and worker management.** AI used for recruitment, screening job applicants, making promotion and termination decisions, and monitoring worker performance is classified as high-risk.

**Education and vocational training.** Systems that determine access to educational institutions, evaluate learning outcomes, or assess student behavior fall into the high-risk category.

**Access to essential services.** AI that evaluates creditworthiness, determines eligibility for public assistance, or assesses insurance risk is high-risk.

**Migration and border management.** Systems used by border authorities for visa processing, asylum applications, and border surveillance carry high-risk obligations.

## The Facial Recognition Gap

The distinction between real-time and retrospective facial recognition is the most consequential loophole in the Act.

Real-time biometric identification — scanning faces as people move through a public space and identifying them in the moment — is effectively banned for law enforcement. The exceptions are narrow and require judicial authorization.

Retrospective facial recognition — taking stored footage and running it through an identification system after the fact — is classified as high-risk, not prohibited. Police departments across the EU can continue to use facial recognition on recorded video, subject to documentation and oversight requirements that are meaningful but do not constitute a ban.

Privacy International has noted that the AI Act addresses the legal void around facial recognition technology but does not fill it entirely.
The high-risk classification for retrospective FRT requires additional conditions and safeguards, including mandatory fundamental rights impact assessments, but it does not prevent police from identifying individuals from surveillance footage without their knowledge.

The practical effect: a police department cannot set up a live facial recognition camera at a protest and identify participants in real time. But it can record the protest, take the footage back to the station, and run facial recognition on it the next day. The difference matters.

## The Social Scoring Loophole

The prohibition on social scoring sounds comprehensive. It is not.

The Act prohibits AI systems operated by or on behalf of public authorities that evaluate or classify people based on social behavior or personal characteristics, leading to detrimental treatment. This targets the Chinese social credit model specifically.

What it does not cover:

**Private-sector credit scoring.** Companies like FICO, Experian, and Equifax continue to operate scoring systems that reduce people to numbers determining access to housing, employment, insurance, and financial services. These systems are classified as high-risk, not prohibited. The EU's prohibition on social scoring applies to government-operated systems that function like China's Social Credit System. It does not apply to the credit scoring infrastructure that already serves a similar function in Western economies.

**Insurance risk scoring.** AI systems that assess risk for insurance purposes are high-risk, not prohibited. The scoring mechanisms that determine whether you can get health, auto, or life insurance — and how much you pay — face documentation requirements but are not banned.

**Algorithmic ranking and rating systems.** Platforms that score users for trustworthiness, reliability, or desirability — whether for gig work, housing, or social access — are not automatically captured by the social scoring prohibition.
Whether a particular system qualifies depends on whether it is operated by or on behalf of a public authority and whether it leads to detrimental treatment as the Act defines it.

The prohibition is real. The gap between what people assume is banned and what is actually banned is also real.

## Enforcement and the Deadline Question

The European Commission has acknowledged that the timeline is tight. In its FAQ on navigating the AI Act, the Commission says delayed standards put the August 2, 2026 application of high-risk rules at risk, and it has proposed linking some application dates to the availability of support measures such as harmonized standards, common specifications, and Commission guidelines.

As of May 2026, the harmonized technical standards that organizations need to demonstrate compliance with the high-risk requirements have not all been published. The European standardization organizations (CEN, CENELEC, and ETSI) are still developing them. This creates a compliance problem: organizations are expected to meet requirements defined by standards that do not exist yet.

The Commission's proposed solution — extending deadlines for rules dependent on standards — has drawn criticism from digital rights organizations, which argue that delay benefits the surveillance technology industry at the expense of the people these rules are meant to protect.

Enforcement itself falls to national supervisory authorities in each EU member state. The quality and capacity of these authorities vary. Some countries have well-resourced data protection authorities. Others do not. The AI Act creates obligations but relies on individual member states to enforce them, which means enforcement will be uneven across the EU — just as GDPR enforcement has been uneven.

## What This Means for Surveillance Technology

The AI Act creates real constraints for the surveillance technology industry operating in Europe:

**Clearview AI's business model is illegal.**
The untargeted scraping of facial images from the internet to build recognition databases is prohibited. Clearview AI has already been fined under GDPR by multiple EU data protection authorities. The AI Act adds a separate enforcement mechanism with potentially higher penalties.

**Live facial recognition by police is effectively banned.** The three narrow exceptions require judicial authorization and proportionality assessments. Mass surveillance through real-time biometric identification in public spaces is not a lawful option under the Act.

**Predictive policing algorithms face scrutiny.** Systems that assess an individual's risk of committing a criminal offense based solely on profiling or personality traits are prohibited. This directly affects the product offerings of companies like Palantir and Honeywell, which have marketed predictive policing tools to European law enforcement.

**Workplace emotion recognition is banned.** Employers cannot deploy AI that infers workers' emotional states from biometric data. This shuts down a category of surveillance technology that was being piloted in several EU countries for remote work monitoring.

At the same time, the Act leaves significant terrain unaddressed:

**Private-sector scoring continues.** The credit bureaus, insurance risk modelers, and platform ranking systems that already function as de facto social scoring mechanisms face heightened requirements but not prohibition.

**Retrospective facial recognition persists.** Law enforcement can continue to identify individuals from recorded footage, subject to safeguards that are meaningful but not prohibitive.

**The border surveillance exemption.** AI systems used for migration and border management are classified as high-risk rather than prohibited, creating a differential standard for the treatment of citizens versus non-citizens.

## How This Compares to the United States

There is no federal AI regulation in the United States. The EU AI Act has no American equivalent.
Where the EU has enacted a comprehensive regulatory framework with enforceable prohibitions, the US has a patchwork of state laws, executive orders, and voluntary commitments.

Several US cities — including San Francisco, Boston, and Portland — have banned government use of facial recognition. But these are local ordinances, not federal law. There is no national prohibition on real-time biometric surveillance. There is no federal ban on predictive policing algorithms. There is no US equivalent to the AI Act's prohibition on social scoring.

The contrast is instructive. The EU has chosen regulation with enforcement teeth. The US has chosen to wait. The result is that the same surveillance technology banned in Berlin can be legally deployed in Boston — not because the technology is different, but because the regulatory framework is.

## What Happens Next

August 2, 2026 is still the next critical date, but it is not the whole story. Organizations deploying AI systems need to track which provision applies to their system category and whether a delayed high-risk deadline applies.

Key dates after that:

- **August 2, 2027 and later:** Additional transition periods apply for some existing or product-embedded high-risk systems, with the exact date depending on the system category and the final Digital Omnibus text.
- **Ongoing:** National supervisory authorities begin enforcement actions. The first fines under the prohibited practices provisions are expected in 2026-2027.
- **Uncertain:** The European Commission may propose amendments as implementation reveals gaps. The "Digital Omnibus" package currently in trilogue includes proposed modifications to the AI Act's high-risk provisions.

The AI Act is not the end of AI regulation in Europe. It is the beginning. The rules will be tested in court, challenged by industry, and refined by experience. What matters is that the framework exists — and that it establishes the principle that surveillance technology requires democratic consent.
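The two hard numbers in this story, the penalty ceiling and the application dates, can be captured in a short sketch. This is an illustration under the figures cited in this article (EUR 35 million or 7% of global turnover for prohibited practices; the February 2025, August 2026, and August 2027 milestones), not a compliance tool; the function names are invented, and the later high-risk dates remain subject to the pending Digital Omnibus text.

```python
from datetime import date

def max_prohibited_practice_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for prohibited-practice violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Application milestones discussed above (the 2026/2027 high-risk dates
# may shift under the Digital Omnibus package).
MILESTONES = {
    date(2025, 2, 2): "prohibited practices apply",
    date(2026, 8, 2): "transparency duties and parts of the high-risk framework",
    date(2027, 8, 2): "further transition periods for some high-risk systems",
}

def obligations_in_force(on: date) -> list[str]:
    """Return the milestones that have already taken effect by a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= on]
```

The `max()` is the point: for a company with EUR 1 billion in turnover, 7% is EUR 70 million, so the percentage branch, not the flat EUR 35 million, sets the exposure.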
The EU decided to regulate AI before it became entrenched. The United States did not. The difference between those choices will shape privacy outcomes for a generation.

---

_Sources include Regulation (EU) 2024/1689 (the AI Act), the European Commission implementation timeline, Privacy International's analysis of facial recognition regulation, EFF's 2025 review, and the AI Act Service Desk FAQ._