PH Privacy
AI Transcription Tools: When a Robot Is Listening, Courts May Find It Is Wiretapping
April 30, 2026
By Aaron Charfoos, Michelle A. Reed and William M. Chaskes
Companies are increasingly turning to AI to support or run their customer service operations, potentially opening the door to significant legal risk. Few states have AI-specific laws on the books, but regulators and plaintiffs’ attorneys are relying on older laws (wiretap, biometrics, common law privacy, etc.) to challenge these new practices. Companies should ensure that they have appropriate privacy and cybersecurity programs in place before putting these new technologies front and center.
A recent case[1] in the Northern District of California puts AI transcription, recording and analytics tools at issue: an AI company (Otter.AI) faces California Invasion of Privacy Act (CIPA)[2] claims arguing that Otter’s AI transcription tool is a distinct legal person acting as a third party, not merely a passive tool controlled by the meeting host. The plaintiffs allege that Otter’s meeting bot joined meetings without consent and transmitted data to Otter both to transcribe the conversations and to improve Otter’s speech recognition and machine-learning models. The merits of the case have yet to be adjudicated, and courts are still grappling with how to classify AI transcription tools: Are they third parties or merely software tools?
Recently, another group of plaintiffs sued companies alleging that the use of AI-enabled recording, transcription and analytics implicates the vendors of those tools as unauthorized third parties when such data is captured or reused in violation of CIPA. For example, plaintiffs recently survived a motion to dismiss in a Northern District of California case based on the novel legal considerations AI presents in a customer service context. In that case, companies used third-party AI software for transcription, smart replies and analytics in customer service calls, and the plaintiffs alleged that they were not informed of, and did not consent to, the transcription and analysis. The court emphasized that the third-party vendor’s mere capability to use the data for its own purposes was sufficient to implicate CIPA liability. Because the vendor had access to the data and could have used it for training purposes, coupled with the alleged lack of consent, the court found that the vendor tool met the capability test under applicable precedent as a third party and that a CIPA violation was therefore plausibly alleged.
Another recently filed complaint in the Northern District of California raises issues similar to the call center scenarios described above, but involving the use of AI transcription tools in a medical context. In this case, the plaintiffs allege CIPA violations and other statutory privacy claims arising from health providers’ deployment of an “ambient clinical documentation” system — an AI‑powered recording technology that captures and transcribes physician-patient conversations. Because the case was recently filed, facts surrounding patient consent and whether the technology sends recordings to the third-party AI vendor will be important as the case approaches the motion-to-dismiss stage of the proceeding.
Previously, CIPA claims like those outlined above typically revolved around pixel tracking. Pixel tracking data has limited utility compared to the expansive possible applications of AI data in training or other contexts. Given courts’ extension of CIPA into the AI context, the growing use of AI across industries and the possibility that any data may prove useful for AI training or other AI purposes, the crucial questions become what data is going to whom and for what purpose. Once that is determined, informing and obtaining the consent of all parties is recommended.
Compliance Checklist
Companies that use AI in their call center operations, whether via third-party call center support vendors or using their own internally developed solutions, can mitigate their risk under this new line of cases by considering the following:
- Inventory and Identify. Inventory technologies and tools (whether developed in-house or provided by third-party vendors) across channels for recording, transcription and analytics capabilities and classify data accordingly. Identify technology and tools that can access, store or process call audio transcripts.
- Disclose. Whether companies collect information on calls or on their websites, they must clearly disclose to consumers what they collect and how they use data. They should also be clear if they are using data for training and reuse in AI models.
- Notice and Consent. Provide notice to, and obtain clear, informed consent from, all parties whose data is being collected, transcribed or recorded (especially for training purposes) where feasible, with simple refusal functionality for non-account participants. Under CIPA, implied consent — inferred from the surrounding circumstances, such as adequate notice of recording — is permitted.[3]
- Vendor Contracts. Know whether your vendors are reusing your data, especially for their own AI training or other purposes. Update vendor agreements to permit processing only for your purposes, prohibit vendor training without explicit consent and require vendors to delete or suppress your data in applicable scenarios.
- Recordkeeping. Calibrate retention periods to comply with the statutory time periods governing consumer information requests.
- Logging. Record consent, vendor access and training use in an audit-ready format.
- Testing. Companies should test their processes from a customer-centric viewpoint across all channels (e.g., website, phone and mobile app) and evaluate how well their refusal and consent frameworks work in practice.