What Khan v. Figma Signals for SaaS, Generative AI, and Data Governance
In November 2025, a proposed class action quietly marked a turning point in how courts may evaluate generative AI development. Khan v. Figma Inc. does not hinge on copyright infringement or scraped public data. Instead, it centers on something more fundamental and potentially more dangerous for software companies: consent.
The lawsuit alleges that Figma used customer design files to train generative AI tools after years of assuring users that their content would not be repurposed. According to the complaint, this shift occurred through default settings and background policy changes rather than affirmative user authorization. That factual posture matters because it reframes AI risk not as an intellectual property dispute, but as a contract, privacy, and trade secret problem.
A Different Legal Playbook for AI Liability
Most early generative AI litigation has focused on whether training data infringed copyrights. Khan v. Figma takes a different route. The plaintiffs rely on four interlocking theories that bypass fair use debates altogether.
First, breach of contract. The claim is straightforward. Users uploaded proprietary designs based on representations that their content would be used only to provide collaboration services. Repurposing that data for AI training allegedly violates the original bargain, regardless of whether the resulting AI output copies anything recognizable.
Second, trade secret misappropriation. Design files often contain confidential product plans, workflows, and unreleased features. The complaint argues that using these files to improve Figma’s AI tools allowed the company to extract competitive value from information users took reasonable steps to keep secret.
Third, the federal Stored Communications Act. Once content is stored in the cloud, the service provider becomes a custodian, not an owner. The plaintiffs argue that mining stored design files for AI training exceeded authorized access, triggering a federal privacy statute that carries statutory damages on a per-user basis.
Fourth, California’s Unfair Competition Law. This claim functions as a backstop, alleging that quietly expanding data use for AI, while marketing trust and confidentiality, constitutes an unfair and deceptive business practice.
Taken together, these theories are powerful precisely because they do not require proof that an AI model produced infringing output. The alleged harm occurs at the moment of unauthorized data use.
Why This Case Emerged Now
The timing of Khan v. Figma is not accidental. Over the past two years, regulators and courts have increasingly signaled that companies cannot quietly broaden how they use customer data for AI.
The US Federal Trade Commission has publicly warned that retroactive or buried changes to data practices may be unlawful, particularly when companies previously promised narrower use.
Similar backlash episodes at Adobe, Zoom, and Slack showed how quickly trust erodes when AI initiatives appear to override prior privacy commitments. In each case, companies moved fast to clarify or reverse course. Figma allegedly did not, and that gap created litigation exposure.
What makes Figma especially vulnerable is the structure of its rollout. According to the complaint, enterprise customers were excluded from default AI training, while individual designers and small teams were not. That asymmetry strengthens the narrative that the company understood the risk, but shifted it onto users with less bargaining power.
The Broader Risk Signal for SaaS and AI Companies
For software and platform companies, the lesson is not limited to design tools. Any SaaS product that hosts user-generated content now faces a similar risk profile if it deploys generative AI.
Three factors appear especially likely to trigger lawsuits going forward.
- Default opt-in for AI training rather than affirmative opt-in. Courts and regulators increasingly view silence or buried toggles as the absence of consent; a sketch of what affirmative, purpose-specific consent gating can look like follows this list.
- Repurposing historical data collected under older terms. Using years of stored content to train new AI systems without renewed permission creates retroactivity problems that are difficult to defend.
- Sensitive or proprietary content. The more confidential the data, whether business documents, private messages, health information, or creative assets, the easier it is to frame AI training as misappropriation rather than innovation.
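To make the distinction concrete, here is a minimal sketch of affirmative, purpose-specific consent gating before any customer content enters a training corpus. All names, fields, and version strings are illustrative assumptions, not a description of Figma's systems or any vendor's actual implementation.

```python
# Hypothetical consent-gating check: content is eligible for AI training only
# when the user affirmatively opted in, for this specific purpose, under the
# terms that actually describe AI training. Field names are assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                     # e.g. "ai_training"
    terms_version: str               # terms the user actually agreed to
    granted_at: Optional[datetime]   # None means the user never opted in


AI_TRAINING_TERMS_VERSION = "2025-11"  # assumed version that first discloses AI training


def may_use_for_training(record: ConsentRecord, content_created_at: datetime) -> bool:
    """Return True only for affirmative, purpose-specific, current consent."""
    if record.purpose != "ai_training":
        return False                 # consent for collaboration is not consent for training
    if record.granted_at is None:
        return False                 # silence or a default-on toggle is not a grant
    if record.terms_version != AI_TRAINING_TERMS_VERSION:
        return False                 # consent given under older, narrower terms
    if content_created_at < record.granted_at:
        return False                 # historical data collected before the grant
    return True
```

The last check is a deliberately conservative design choice: it treats content created before the opt-in as out of scope, which is one way to avoid the retroactivity problem described above.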
The risk is not purely civil. Remedies increasingly discussed by plaintiffs and regulators include injunctions requiring deletion of AI models trained on improperly obtained data, a remedy sometimes described as algorithmic disgorgement. That is an existential threat to AI products built on customer content.
Why This Matters Beyond Figma
Khan v. Figma should be read as an early test case for a new enforcement theory. Instead of asking whether AI outputs are lawful, courts are being asked to decide whether companies respected the boundaries of consent when building those systems.
If this approach succeeds, it scales easily. It applies across industries, across data types, and across AI use cases. It also aligns closely with existing consumer protection and privacy doctrines, which could fuel a wave of analogous litigation.
For companies developing AI, the strategic implication is clear. Transparency and affirmative consent are no longer just best practices. They are becoming legal fault lines. Quiet policy edits, default toggles, and retrospective data use are no longer low-risk implementation details. They are potential litigation triggers.
The Bottom Line
Khan v. Figma is not about whether AI should exist. It is about whether companies can quietly change the rules governing user data once AI becomes commercially attractive.
The case suggests that courts may be receptive to a simple principle: data collected under one set of promises cannot be reused for AI under another set without asking first. For the AI ecosystem, that principle may prove more consequential than any single copyright ruling.
Rain Intelligence exists to track these moments before they harden into doctrine.
We monitor early AI related lawsuits, regulatory signals, and plaintiff firm investigations to identify where consent, data use, and AI deployment are becoming litigation risks.
Book a demo and get early visibility into how today’s product decisions may become tomorrow’s claims.



