Meta Lied About Its Smart Glasses Protecting User Privacy, New Class Action Lawsuit Claims

Meta Platforms, Inc. is facing a class action lawsuit accusing the tech giant of misleading customers about the privacy features of its Ray-Ban smart glasses. The legal challenge follows an investigative report revealing that Meta’s subcontracted data annotators were allegedly viewing highly intimate and personal footage captured by users through their devices, contradicting the company’s “privacy-first” marketing promises.

The controversy began with an investigation by two Swedish newspapers, Svenska Dagbladet and Göteborgs-Posten. Their exposé, published after Meta reportedly sold an estimated seven million Ray-Ban smart glasses in 2025 alone, painted a disturbing picture. The investigation found that human contractors in Nairobi, Kenya, employed by Meta’s subcontractors for data labeling, had access to and were reviewing sensitive footage captured by users. This included deeply personal moments, such as individuals in their bathrooms or during sexual encounters, captured unwittingly by devices marketed as extensions of personal style and convenience.

These revelations did more than expose an operational flaw; they pulled back the curtain on an uncomfortable truth within the artificial intelligence industry. The development of sophisticated AI models, particularly those integrated into consumer-facing devices, relies heavily on vast quantities of labeled data. This data, essential for training algorithms to understand and interpret real-world scenarios, is frequently processed by a global workforce of human annotators, many of whom work in low-wage environments far removed from the end-users. This “invisible workforce” performs the painstaking task of categorizing, transcribing, and reviewing data so that AI systems can learn from diverse inputs. As this case starkly illustrates, however, the ethical implications of that reliance, especially when it involves deeply personal and sensitive user data, are often overlooked or deliberately obscured in the glossy marketing campaigns of some of the world’s most influential tech companies.

The public outcry was swift, and the legal repercussions were swifter. Just days after the Swedish investigation sent shockwaves through the tech world, Meta was hit with a class action lawsuit. Filed in a San Francisco district court on a recent Thursday and obtained by Futurism, the suit alleges that Meta engaged in misleading advertising practices. The core of the complaint centers on Meta’s marketing campaigns, which emphatically positioned privacy as a cornerstone of the Ray-Ban smart glasses experience.

The lawsuit’s language is uncompromising, directly challenging Meta’s carefully crafted public image. “No reasonable consumer would understand ‘designed for privacy, controlled by you’ and similar promises like ‘built for your privacy’ to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas,” the legal document states unequivocally. It goes on to charge that “Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false.” This assertion suggests a deliberate strategy to leverage privacy concerns, knowing that consumer apprehension about pervasive surveillance could be a significant barrier to adoption for smart wearables.

Yana Hart, a partner at Clarkson Law Firm, which filed the lawsuit, articulated the firm’s stance in a powerful statement. “You cannot market a product as ‘built for privacy’ and then funnel footage of people’s intimate moments to contract workers without their knowledge,” Hart emphasized. She further elaborated on the perceived manipulative nature of Meta’s strategy: “Meta made privacy the centerpiece of its marketing campaign because it knew consumers would never buy these glasses if they knew the truth.” This argument posits that Meta’s marketing was not merely an oversight but a calculated deception, exploiting consumer trust to drive sales of a product whose true operational mechanics contradicted its advertised safeguards.

The class action lawsuit is not merely seeking redress for individual consumers but aims to hold Meta accountable for systemic deception. It “seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company’s AI data collection pipeline.” This broader objective underscores the potential for this case to set a precedent for how tech companies market and operate AI-powered consumer devices, particularly concerning data privacy and transparency regarding human involvement in data processing.

In response to the escalating controversy, a Meta spokesperson offered a limited statement to Engadget, acknowledging that data from its glasses might indeed end up in the hands of human contractors. However, the spokesperson notably declined to directly address the specific allegations made within the lawsuit. Furthermore, Meta reiterated a standard privacy claim, stating that “unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.”

Yet critics, and the lawsuit itself, contend that this statement is a half-truth that strategically omits crucial details. What Meta reportedly fails to explain is that using the devices’ core AI features, such as voice commands, image recognition, or contextual understanding, is virtually impossible without authorizing the very data-processing pipeline that includes human contractors. For the AI to function as advertised and improve over time, the resulting footage and audio must be analyzed and annotated, often by human reviewers. This renders the “choice” to share media moot for users who wish to fully use the smart glasses’ primary functions. The lawsuit claims that Meta did not adequately disclose that such intimate footage could be reviewed and annotated by human contractors, transforming the smart glasses from a convenience into a significant privacy liability.

The potential ramifications of this alleged breach of trust are extensive and deeply troubling. The lawsuit meticulously details the spectrum of harm to which consumers are exposed. “The undisclosed human review pipeline renders the Meta AI Glasses’ privacy features materially misleading, transforms the product from a personal device into a surveillance conduit, and exposes consumers to unreasonable risks of dignitary harm, emotional distress, stalking, extortion, identity theft, and reputational injury,” the document asserts. This paints a grim picture of a device designed for personal use inadvertently becoming a tool for potential exploitation and profound psychological distress.

The legal filing continues to elaborate on the pervasive nature of the risk: “The exposure of such content to thousands of unknown individuals creates a persistent and unreasonable risk of harm that Meta’s marketed privacy features were represented to, but do not, prevent.” This highlights not just the initial breach of privacy but the enduring vulnerability users face, knowing their most private moments may exist in the hands of an unknown, expansive workforce, with no guarantees about how that data is secured or utilized long-term.

Ryan Clarkson, the managing partner at Clarkson Law Firm, encapsulated the gravity of the situation. “Meta made a promise to millions of consumers while knowing full well it could not keep it,” he stated. He further condemned Meta’s actions as a deliberate systemic design rather than an accidental oversight: “While the multi-trillion dollar tech titan attempted to reassure and placate consumers about these smart glasses through ads about privacy and control, workers thousands of miles away have been watching footage from inside people’s bedrooms all along. That is not a technicality or an oversight – that is a system working exactly as designed, and it cannot be allowed to continue.” This strong condemnation suggests that the lawsuit views Meta’s conduct as a fundamental failure of corporate ethics and a deliberate betrayal of consumer trust.

Beyond the immediate legal battle, the revelations have ignited a firestorm of public criticism and mockery, with internet users quickly coining a derogatory nickname for Meta’s product: “pervert glasses.” The backlash underscores the severe reputational damage Meta faces, which could undermine future adoption of its metaverse and wearable ambitions. The incident also casts a long shadow over the entire smart wearable industry, raising broader questions about the transparency of data collection practices, the ethical treatment of data annotators, and the true cost of “convenience” in the age of omnipresent AI.

As the class action lawsuit progresses, it will undoubtedly serve as a critical test case for how legal systems and public opinion will grapple with the complex intersection of advanced AI, personal privacy, and the globalized workforce that powers the digital future. The outcome could significantly influence regulatory frameworks, industry standards, and ultimately, the trust consumers are willing to place in the next generation of smart devices.

More on the glasses: Meta Workers Say They’re Seeing Disturbing Things Through Users’ Smart Glasses