Key takeaways

  • Following numerous dismissals of class actions targeting website session-replay technology—which collects data concerning on-website keystrokes and mouse movements—the plaintiffs' bar has commenced a new wave of lawsuits alleging that chatbots violate the California Invasion of Privacy Act.
  • Many of the same defenses that were successful against the session-replay cases should defeat claims targeting chatbots.
  • Businesses should consider steps to protect themselves from chatbot-focused claims by, for example, reviewing privacy policies to ensure their use of technology is transparent; obtaining affirmative consent to certain data collection practices, particularly those that involve recording and the use of chatbots; and appropriately allocating risk and liability with vendors.

The session replay wave flows and ebbs

Over the past few years, businesses have been hit by a wave of class actions alleging that ubiquitous marketing-analytics technology used on modern websites—including many federal court websites—violates federal and state wiretapping laws. Dozens of these lawsuits have been initiated by just a handful of plaintiffs' firms, with the bulk of them filed in federal courts in California and Florida. At the center of these claims is a commonplace technology known as session replay, which allows a website operator to simulate a video of a user's keystrokes and mouse movements in order to obtain analytics used to improve website performance. The lawsuits generally allege that use of session-replay technology amounts to "wiretapping" the user's internet connection in order to surreptitiously record the user's keystrokes and mouse movements.

A variety of defenses, including consent, agency, the party exception, and jurisdictional challenges, have proven largely successful at keeping these claims at bay. For example, in dismissing claims alleged under Section 631(a) of the California Invasion of Privacy Act (CIPA), one court recognized that the session-replay vendor merely provided a tool for the online business "to record and analyze its own data in aid of [the company's own] business."1 The court thus aptly concluded, "There is no equivalent of a wiretap here, which supports the conclusion that [the session-replay vendor] is not a third-party eavesdropper."2

Others have successfully defeated session-replay claims by arguing that the data collected do not amount to substantive communications covered by the relevant wiretapping laws. Indeed, several courts evaluating claims under the Florida Security of Communications Act (FSCA) have concluded that mouse movements, keystrokes, and similar data input by a user on a website have no "contents" within the meaning of the FSCA because they do not convey the substance of any particular communication. Based on this reasoning, a number of session-replay claims under the FSCA have been dismissed with prejudice.

The chatbot wave begins

While the initial session-replay wave appears to be receding, some of California's most prolific plaintiffs' attorneys have already begun asserting a new—or at least modified—wave of actions. Although the modified claims still allege wiretapping violations based on use of session replay, they further allege violations based on the website's use of a virtual chatbot, which allows a website visitor to engage in a text conversation with a virtual assistant. To date, these chatbot claims have been alleged under CIPA, but at least one plaintiff has already alleged a claim under the FSCA for recording of "live" chat conversations.

Despite repeating the copy-and-paste session-replay claims that have not fared well, this new wave of actions seems to acknowledge that a substantive communication is necessary for a viable wiretapping claim. Unlike keystrokes and mouse movements, the content a user enters into a chatbot may contain substantive questions or information. Accordingly, the no-substance argument will likely fail in this context.

Regardless, many of the other defenses to session-replay claims remain viable here. Indeed, properly obtained consent can arguably shut down a majority of these claims. While details of session replay are often included in a website's privacy policy and other disclosures, a short, specific consent or just-in-time notice presented before a visitor engages with a chatbot is a workable way to help ensure the individual is voluntarily providing information. While plaintiffs may argue that such consent extends only to the website operator, so long as the third-party chatbot vendor is not intercepting communications in transit or using the information for its own purposes, existing case law supports dismissing these claims.

How to shore up defenses

While consent, agency, and the party exception will likely remain viable defenses to chatbot-targeted CIPA claims, businesses that deploy this technology should consider taking steps to protect themselves from protracted litigation if such claims are alleged. Indeed, there are a number of tangible actions that may reduce exposure, including, for example, the following:

  • Ensure that website privacy policies are clear and transparent in disclosing what technologies are used on the site, as well as what data is collected and how it is used
  • Consider obtaining affirmative consent for certain privacy practices through use of conspicuously placed web banners or just-in-time notices
  • Make sure privacy policies are conspicuous and accessible by, for example, making the policy available in the footer of every page where personal information is collected
  • Obtain affirmative consent prior to allowing a user to engage with a chatbot (an illustrative technical sketch follows this list)
  • Review and, if necessary, renegotiate vendor agreements to ensure that risk and liability are allocated appropriately
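
For businesses coordinating these steps with their web teams, the following is a minimal, purely illustrative sketch (in TypeScript) of the just-in-time consent pattern described above: the chatbot is not initialized until the visitor affirmatively agrees, and the consent decision is recorded with a timestamp and notice version. The names used here (loadChatWidget, onChatIconClick, CONSENT_KEY) are assumptions for illustration rather than any particular vendor's API, and the disclosure wording itself should be drafted and reviewed by counsel.

```typescript
// Illustrative sketch only. Names and storage choices are assumptions,
// not any specific chatbot vendor's API.

interface ConsentRecord {
  granted: boolean;
  timestamp: string;     // ISO 8601; helps evidence when consent was given
  noticeVersion: string; // ties the consent to the disclosure text shown
}

const CONSENT_KEY = "chat-consent-v1"; // hypothetical storage key

function getStoredConsent(): ConsentRecord | null {
  const raw = window.localStorage.getItem(CONSENT_KEY);
  return raw ? (JSON.parse(raw) as ConsentRecord) : null;
}

function storeConsent(record: ConsentRecord): void {
  window.localStorage.setItem(CONSENT_KEY, JSON.stringify(record));
}

// Placeholder for whatever script or SDK call actually starts the chatbot.
function loadChatWidget(): void {
  console.log("Chat widget initialized only after affirmative consent.");
}

// Shows a just-in-time notice and resolves true only on an explicit "Agree" click.
function showJustInTimeNotice(): Promise<boolean> {
  return new Promise((resolve) => {
    const banner = document.createElement("div");
    banner.setAttribute("role", "dialog");
    banner.innerHTML = `
      <p>Chats may be recorded and shared with our service provider.
         See our <a href="/privacy">privacy policy</a>.</p>
      <button id="chat-consent-agree">Agree and start chat</button>
      <button id="chat-consent-decline">No thanks</button>`;
    document.body.appendChild(banner);
    banner.querySelector("#chat-consent-agree")!.addEventListener("click", () => {
      banner.remove();
      resolve(true);
    });
    banner.querySelector("#chat-consent-decline")!.addEventListener("click", () => {
      banner.remove();
      resolve(false);
    });
  });
}

// Entry point: wire this to the chat icon's click handler rather than page load,
// so the notice appears immediately before the chat session begins.
export async function onChatIconClick(): Promise<void> {
  const existing = getStoredConsent();
  if (existing?.granted) {
    loadChatWidget();
    return;
  }
  const granted = await showJustInTimeNotice();
  storeConsent({ granted, timestamp: new Date().toISOString(), noticeVersion: "v1" });
  if (granted) {
    loadChatWidget();
  }
}
```

The design point that supports the consent defense discussed above is timing and specificity: the disclosure appears immediately before the chat begins, names the recording and the service provider, and the widget does not load at all absent an affirmative click.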

Footnotes

1   Graham v. Noom, Inc., 533 F. Supp. 3d 823, 832 (N.D. Cal. 2021).

2   Id. at 833.


