**TL;DR:** 47 of 50 US states now have deepfake legislation on the books. The laws break down into three broad categories: election integrity, non-consensual intimate imagery, and fraud. The regulatory patchwork is dense, inconsistent, and expanding fast. Here's what each category actually does, where the federal government stands, and what's still missing.

---

## The Numbers

State-level deepfake legislation has accelerated dramatically:

- **2019:** Virginia and Texas became the first states to criminalize deepfake pornography and election deepfakes, respectively.
- **2023:** A wave of bills followed the spread of AI-generated explicit images of real people, particularly targeting minors.
- **2025:** 64 new deepfake laws were enacted across states, making it the single biggest year for this legislation. Michigan became the 47th state in August 2025.
- **2026:** 15 more bills have been enacted so far this year, though no new states have been added to the list. Alaska, Missouri, and New Mexico remain without specific deepfake statutes.

The pace shows no sign of slowing. According to MultiState's tracking, lawmakers in every state introduced some form of sexual deepfake legislation in 2025 alone.

---

## Category 1: Election Integrity Deepfakes

At least 25 states have enacted laws specifically targeting deceptive AI-generated media in election contexts. These laws typically:

- Prohibit distributing deepfakes of candidates, elected officials, or election workers with the intent to deceive voters
- Require disclosures when AI-generated or manipulated media is used in political advertising
- Establish timing windows: many laws apply only within 30, 60, or 90 days of an election
- Provide exceptions for parody, satire, and content that includes a clear disclaimer

Notable examples:

- **California (AB 730, AB 2652):** Prohibits distributing deceptive audio or visual media of a candidate within 60 days of an election. Also requires labeling AI-generated content in political ads.
- **Texas (SB 751):** One of the earliest election deepfake laws, criminalizing the creation and distribution of deepfakes intended to influence elections.
- **Minnesota (SF 471):** Makes it a gross misdemeanor to distribute deepfakes of political candidates within 90 days of an election.
- **Michigan (SB 417-419):** Criminalizes deepfakes intended to damage a candidate's reputation or influence an election, with enhanced penalties near election dates.

The First Amendment looms large over these laws. Several have been challenged in court, and courts have generally required that laws target knowing distribution of deceptive content with intent to harm, rather than banning all AI-generated political media outright.

---

## Category 2: Non-Consensual Intimate Imagery (NCII)

This is the most widely addressed category. Nearly all 47 states with deepfake laws include provisions against AI-generated non-consensual intimate imagery. These laws:

- Criminalize creating, sharing, or threatening to share AI-generated nude or sexual images of someone without their consent
- Impose penalties ranging from misdemeanors to felonies, with enhancements when the victim is a minor
- Allow civil lawsuits in some states, letting victims sue for damages
- Address extortion specifically: several states add enhanced penalties when deepfake NCII is used for sextortion

Notable examples:

- **Virginia (SB 1357):** The first state law to criminalize deepfake pornography, enacted in 2019 and updated in subsequent sessions to increase penalties.
- **Louisiana (SB 44):** Criminalizes creation and distribution of deepfake NCII, with penalties up to 10 years when the victim is under 17.
- **New York (A02249):** Added civil remedies and registration requirements for AI-generated replicas of individuals, heavily shaped by entertainment industry advocates concerned about AI replication of performers.
- **Florida (SB 168):** Makes non-consensual deepfake pornography a felony, with enhanced penalties for content depicting minors.
The federal DEFIANCE Act, signed in 2024, allows victims to file civil lawsuits for damages in federal court, but it does not create a federal criminal offense. State criminal laws remain the primary enforcement mechanism.

---

## Category 3: Fraud and Financial Crimes

Many states have updated existing fraud, identity theft, and extortion statutes to treat deepfakes as a tool rather than a standalone offense. These laws:

- Add sentencing enhancements when deepfakes are used to commit wire fraud, insurance fraud, or extortion
- Criminalize using AI-generated voice or video to impersonate someone for financial gain
- Address business email compromise (BEC): the FBI has reported that deepfake audio is increasingly used in BEC scams

This category is the least uniform. Some states have standalone deepfake fraud statutes; others simply let prosecutors apply existing fraud laws, treating deepfake technology as an aggravating factor.

---

## What's Missing

Despite the legislative momentum, significant gaps remain:

- **No comprehensive federal law.** The DEFIANCE Act addresses only NCII, and only in civil court. There is no federal criminal statute specifically targeting deepfakes. Pending bills include the NO FAKES Act (which would create a property right in one's likeness) and the Deepfake Task Force Act, but neither has passed.
- **No labeling mandate.** No state requires universal labeling of AI-generated content. Some election laws require disclosures for political ads, but there is no general obligation to label synthetic media.
- **Weak enforcement.** Criminal penalties exist on paper, but prosecution rates are low. Victims of NCII often struggle to identify perpetrators, and jurisdictional issues complicate cases that cross state lines.
- **No platform accountability.** Current laws primarily target the creators and distributors of deepfakes, not the platforms that host them. Section 230 of the Communications Decency Act generally shields platforms from liability for user-generated content.
- **Three states with no laws at all.**
  Alaska, Missouri, and New Mexico lack any deepfake-specific legislation. Residents of these states have fewer legal remedies than those in the other 47 states.

---

## International Context

The EU AI Act, which took effect in phases starting in 2024, classifies AI systems that generate deceptive content as "limited risk" and requires transparency measures. The EU's approach is more systematic than the US state-by-state patchwork, but enforcement is still ramping up. China's deep synthesis regulations, in effect since January 2023, require labels on all AI-generated content and impose obligations on platforms. The UK's Online Safety Act includes provisions on intimate image abuse that cover deepfakes.

---

## What You Can Do

- **Check your state.** Use the Ballotpedia deepfake legislation tracker or the Free Speech Coalition's age verification/action center to find your state's specific laws.
- **Report NCII.** If you're a victim of non-consensual deepfake imagery, the Cyber Civil Rights Initiative operates a removal helpline and can connect you with legal resources.
- **Support federal legislation.** The NO FAKES Act and a standalone federal criminal statute for NCII would close the biggest gaps in the current patchwork.
- **Use detection tools.** Our Media Forensics Inspector can analyze images for signs of AI generation. It runs entirely in your browser; nothing is uploaded.

---

## Sources

- Ballotpedia, "State Deepfake Laws Hit Record Pace" (2025)
- Cybersecurity Law Report, "Navigating the Patchwork of Federal and State AI Deepfake Laws"
- Public Citizen, "25 States Enact Laws to Regulate Election Deepfakes" (2025)
- MultiState, "State Deepfake Laws in 2026: What's Changed and What's Next"
- Ballotpedia News, "15 Deepfake Bills Enacted So Far This Year" (April 2026)