In early 2026, a lawsuit filed in federal court in San Diego made a specific and alarming allegation: Sharp HealthCare, one of the largest healthcare systems in Southern California, had been using AI to record, transcribe, and cloud-process medical consultations without telling patients. The lawsuit did not claim the recordings were used for treatment. It alleged something more specific: that Sharp had implemented an AI-powered "ambient clinical intelligence" system (the kind of tool that listens to the doctor-patient conversation, generates notes, and drafts clinical documentation) and that this was done without the knowledge or consent of the patients being recorded.

This is what the medical privacy debate looks like when it moves from theory to practice.

What the Lawsuit Alleges

According to the complaint, Sharp HealthCare deployed an AI system that:

- Used electronic recording devices to capture entire medical consultations
- Transmitted those recordings to a cloud-based AI processing system
- Used that AI to generate clinical notes, documentation, and summaries
- Never obtained written or verbal consent from patients
- Never disclosed to patients that their conversations were being processed by a third-party AI

The lawsuit alleged violations of the California Invasion of Privacy Act and the California Confidentiality of Medical Information Act, along with common-law privacy claims.

What makes this significant is not just the scale (Sharp HealthCare serves millions of patients) but the specific failure mode: this was not a data breach in which information was stolen. It was an open, ongoing recording of intimate medical conversations, processed by an AI the patient never knew existed.

HIPAA Was Not Written For This

The Health Insurance Portability and Accountability Act (HIPAA) was passed in 1996. Its core framework: covered entities (hospitals, insurers, clinics) must protect individually identifiable health information, and business associates (vendors who handle PHI on behalf of covered entities) must sign agreements and comply with HIPAA's security requirements.

But HIPAA was not designed for a world where AI systems passively listen to every consultation and extract clinically relevant information in real time. The act's "minimum necessary" standard assumes that humans are handling data, and that the data being handled is being used for a specific, defined purpose.

AI ambient clinical intelligence tools create a different risk profile: they process everything said in the room, including conversations about reproductive health, mental health, substance use, domestic violence, and other topics that patients may disclose only because they trust the medical setting to be private.

If that data is being sent to a cloud-based AI, even if the AI is generating notes for the physician, the data is flowing to a third party outside the direct treatment relationship. Under HIPAA, that requires a business associate agreement and compliance with the HIPAA Security Rule's requirements for cloud processing of PHI.

The questions the lawsuit raises: Did Sharp have proper business associate agreements with its AI vendor? Did the AI vendor comply with HIPAA's technical safeguard requirements? Was the data being used only for the clinical documentation purpose, or was it being retained, used for AI training, or shared with other parties?

The FTC's Interest in AI in Healthcare

The FTC has been explicit about its interest in AI healthcare applications. In a 2026 policy statement, the FTC noted that AI systems in healthcare settings that collect sensitive data are a priority enforcement area.

The FTC's concern is the "surveillance capitalism" model applied to healthcare. When a hospital deploys an AI that records every consultation, it is building a dataset of intimate medical conversations. That dataset is enormously valuable: for training future AI models, for quality improvement, for research, for benchmarking. If patients are not told about this data collection, they cannot consent to it. If they cannot consent, the data is being extracted from them without consideration.

This is the same argument that has been made about social media platforms. The healthcare context makes it more acute because the data is not merely behavioral; it is the most intimate kind of personal information.

The Medical AI Consent Problem

The Sharp lawsuit highlights a structural problem in the deployment of AI clinical documentation tools. These tools are sold to physicians as efficiency tools: the pitch is that doctors spend two hours on documentation for every hour of patient care, and AI can fix that. The pitch works. Physician burnout from EHR documentation burden is a documented crisis. Medical systems are deploying ambient AI tools to reduce documentation time, and physicians generally welcome them.

But the consent question is almost never handled at the patient level. The AI vendor's standard contract puts the obligation to inform patients on the medical provider. The medical provider, focused on clinical operations, often does not create a patient-facing consent process. Patients do not know they are being recorded.

The result: an intimate medical conversation is recorded and processed by an AI the patient never knew existed, for a purpose the patient was never told about.
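What would handling consent at the patient level even look like? At minimum, the recording client would refuse to start until an affirmative, unrevoked consent record exists for that patient. Here is a minimal sketch in Python; the ConsentRecord fields, the scope string, and the function names are all hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical stored record of a patient's consent to AI documentation.

    A real system would tie this to the EHR's consent module; these
    field names are illustrative only.
    """
    patient_id: str
    scope: str              # e.g. "ambient_ai_documentation"
    obtained_at: datetime
    revoked: bool = False

def may_start_ambient_capture(consent: ConsentRecord | None) -> bool:
    """Gate the recorder: no affirmative, unrevoked consent, no capture."""
    if consent is None or consent.revoked:
        return False
    return consent.scope == "ambient_ai_documentation"

# Usage: the recording client checks the gate before opening the microphone.
consent = ConsentRecord(
    patient_id="pt-001",
    scope="ambient_ai_documentation",
    obtained_at=datetime.now(timezone.utc),
)
assert may_start_ambient_capture(consent)
assert not may_start_ambient_capture(None)   # no record means no recording
```

The point of a gate like this is that consent becomes a precondition the software enforces, rather than a disclosure obligation passed down a contract chain and dropped.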
The Broader Pattern: AI in the Exam Room

The Sharp case is not unique. Similar allegations have been made against other health systems deploying AI documentation tools. The industry pattern is consistent: deploy the AI, let it record everything, and deal with the consent question later or not at all.

This parallels the early days of health information exchanges, when hospitals shared patient data with third parties through electronic health record vendors without consistent patient notification or consent. The regulatory response came later, after the practices were widespread. With AI clinical documentation, the same pattern is emerging. The question is whether the regulatory response will come before the practices are entrenched.

What HIPAA Actually Requires for AI Processing

The HHS guidance on HIPAA and AI, published in 2024 and expanded in 2026, clarifies several points:

AI processing of PHI requires HIPAA compliance. If an AI vendor is receiving, storing, or processing PHI, it is a business associate. It must have a Business Associate Agreement (BAA) with the covered entity, and it must comply with HIPAA's administrative, physical, and technical safeguard requirements.

Patient authorization may be required for AI processing. If the AI processing goes beyond the "treatment, payment, and operations" (TPO) scope, patient authorization may be required. Whether AI-generated clinical documentation falls within TPO or requires separate authorization is actively being litigated.

The minimum necessary standard applies. The AI system should access only the information it needs to generate clinical notes, not every word spoken in a consultation. HHS has noted that AI tools that capture all ambient conversation and process everything are not applying the minimum necessary standard.

Audit trails are required. If an AI is processing consultation recordings, there should be an audit trail showing when data was accessed, by whom, and for what purpose. HIPAA requires this. Most AI vendors do not make it easy for covered entities to get these audit trails.
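The last two requirements translate directly into implementation choices. Here is a minimal sketch, in Python, of what a minimum-necessary filter and a disclosure audit record could look like in such a pipeline; the section tags, field names, and allowlist are all assumptions for illustration, and in practice the minimum-necessary policy would be defined by the covered entity, not the vendor:

```python
import json
from datetime import datetime, timezone

# Hypothetical allowlist of note sections the documentation AI actually needs.
RELEVANT_SECTIONS = {"history_of_present_illness", "assessment", "plan"}

def filter_transcript(segments: list[dict]) -> list[dict]:
    """Forward only segments tagged as note-relevant, rather than
    shipping the entire room audio to the cloud vendor."""
    return [s for s in segments if s.get("section") in RELEVANT_SECTIONS]

def audit_entry(actor: str, patient_id: str, purpose: str, n_segments: int) -> str:
    """One append-only audit record: who accessed what, when, and why."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                 # e.g. the AI vendor's service account
        "patient_id": patient_id,
        "purpose": purpose,             # e.g. "clinical_documentation"
        "segments_disclosed": n_segments,
    })

# Usage: filter before disclosure, then log the disclosure itself.
segments = [
    {"section": "assessment", "text": "..."},
    {"section": "small_talk", "text": "..."},   # never leaves the room
]
disclosed = filter_transcript(segments)
print(audit_entry("scribe-ai-svc", "pt-001", "clinical_documentation", len(disclosed)))
```

The design choice worth noting: filtering happens before anything leaves the room, and the audit record captures the disclosure itself, which is exactly what a covered entity needs in order to answer the who, when, and why questions HIPAA's audit requirements pose.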
What Patients Can Do

If you are a patient at a health system that uses AI documentation:

- You can ask your provider whether AI is being used to process your consultations
- You can request access to any records of your consultation that were processed by AI
- You can file a complaint with the HHS Office for Civil Rights (OCR) if you believe your HIPAA rights were violated
- In many states, you can opt out of AI processing; the process varies by provider

The Industry Response

The American Medical Association and other medical professional organizations have issued guidance on AI clinical documentation tools. Their consensus: these tools can reduce physician burnout, but they must be deployed with proper patient notification and consent processes.

The problem is that the guidance is not binding. The medical systems deploying these tools are doing so under vendor contracts that shift the consent burden downstream. The vendors are not in a direct relationship with patients. The result is that no one in the chain is explicitly handling consent.

The Sharp lawsuit is attempting to close that gap through litigation. Whether it succeeds or settles, it will create case law on what hospitals must tell patients about AI documentation tools. That case law is coming faster than the industry's consent processes.