Someone in Nairobi Is Watching Your Bathroom Through Meta's AI Glasses
A Swedish newspaper investigation found that Meta's Ray-Ban AI glasses send user footage — including bathroom visits, undressing, and bank card details — to data annotators at Sama in Nairobi, Kenya. Face-blurring regularly fails. EU regulators are circling. And the same week this broke, Motorola announced a partnership with GrapheneOS to build privacy-first hardware by 2027.
LTT WAN Show — Meta Ray-Ban privacy, GrapheneOS on Motorola, Apple bombs
Key Points
• A joint investigation by Swedish newspapers found Meta's Ray-Ban AI glasses send user footage to data annotators at Sama in Nairobi, Kenya — including bathroom visits, undressing, sex, and exposed bank card details
• Workers said the automated face-blurring system regularly fails, especially in poor lighting — meaning real faces and bodies are clearly visible
• Meta's AI terms permit human review, but the privacy policy never specifies that "review" might mean workers in Kenya watching your private moments
• EU regulators flagged potential GDPR violations; Meta's internal memo revealed the facial recognition feature was deliberately timed during a "dynamic political environment"
• Motorola announced a long-term partnership with GrapheneOS — the privacy-first Android fork — with support coming to 2027 flagship devices
The Camera You're Wearing to Dinner Is Streaming to a Data Center
Here's the thing Meta doesn't put in the marketing copy for its Ray-Ban smart glasses: when you ask the AI assistant what's in front of you, that moment doesn't stay between you and an algorithm. It goes to a server. It gets processed. And in a lot of cases, a human being in Nairobi, Kenya, looks at it.
A joint investigation published last week by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten interviewed over thirty employees at Sama, a data annotation subcontractor Meta uses to train its AI systems. What those workers described is about as far from "built with your privacy in mind" — which is literally how Meta markets the glasses — as you can get [1].
Workers said they regularly see footage of users in the bathroom, undressing, having sex, or unknowingly filming their partners in private moments. Some described footage of people watching pornography while wearing the glasses. Others flagged exposed bank card details captured as the wearer typed PINs or scanned cards [1][3]. "We see everything — from living rooms to naked bodies," one worker told the Swedish outlets. "Meta has that type of content in its databases." [3]
This isn't a theoretical privacy risk. It's an actual person, in an actual building in Nairobi, watching video of your actual life.
Linus Sebastian covered both stories on the WAN Show, calling privacy "dead" but noting tools like GrapheneOS are becoming viable alternatives
Meta Said the Faces Would Be Blurred. They Often Aren't.
When the Swedish journalists started digging into exactly what safeguards exist, Meta pointed to automatic face-blurring — an algorithm that anonymizes footage before it reaches human annotators. Former Meta employees confirmed this is the intended process. The problem: it doesn't reliably work.
Data annotators in Kenya told investigators that the blurring fails regularly, especially in difficult lighting conditions. "The algorithms sometimes miss," a former Meta employee confirmed. "In certain conditions, faces and bodies become visible." [3] The workers see those faces. They keep going. They don't have the option to refuse — according to interviews, workers who asked too many questions about the content they were seeing risked losing their jobs [2].
Independent testing cited in the investigation also showed that most of the glasses' AI features require cloud connectivity and cannot function offline. Which means every time you're using the assistant, data leaves the device. This directly contradicts what several European retailers told customers — that the glasses process data locally — which adds yet another layer to the transparency problem [2].
The UK's Information Commissioner's Office called the findings "concerning" and said it was writing to Meta [1]. EU lawyers flagged potential GDPR violations because there is no EU adequacy decision covering Kenya's data protection standards — meaning transferring EU user data to Sama in Nairobi may lack a proper legal basis under the regulation [1].
Linus and Luke breaking down the Meta Ray-Ban investigation on the WAN Show. Credit: Linus Tech Tips / YouTube
Meta's Response Is a Masterclass in Not Answering the Question
After two months of questions from the Swedish journalists, Meta responded with a statement that described how data moves from the glasses to the mobile app and pointed to its AI terms of service. That's it. No direct answer on where the images come from, no details on how long recordings are stored, no information on who has access beyond "it may be subject to human review" [2].
The terms of service, by the way, are buried behind multiple links. Swedish retailers gave customers contradictory answers about how data is handled. And Meta's own documentation is clear that users cannot opt out of the server-side processing that makes the AI features work at all [3].
There's also the "Name Tag" feature to consider. Meta is rolling out facial recognition for the Ray-Bans — a feature that can identify people the wearer is looking at. An internal memo, obtained by the New York Times, revealed that Meta deliberately timed the feature's launch for what it internally described as a "dynamic political environment" — a moment when civil society groups would be distracted by other concerns and less likely to mount organized opposition [1].
That's not a company that made a privacy mistake. That's a company that made a calculated decision.
Linus called it out directly on the WAN Show: "Every time we've talked about what the killer app would be for smart glasses, it was the creepy dystopian facial recognition. And here we are." He added: "Meta moved past being a social network and became an advertising monstrosity. Many, many years ago." [1]
The Week This Story Dropped, GrapheneOS Announced It's Coming to Motorola
The timing is almost too on the nose. The same week Meta's AI glasses privacy scandal blew up, Motorola announced at Mobile World Congress 2026 in Barcelona that it has entered a long-term partnership with GrapheneOS — the security- and privacy-focused Android fork that has spent its entire existence exclusive to Google Pixel devices [4][5].
GrapheneOS is not a casual alternative to stock Android. It's a hardened operating system built around significantly improved sandboxing, exploit mitigations, and a permission model that actually means something. It's what security researchers, journalists working in hostile environments, privacy advocates, and genuinely paranoid technologists run when they need to know their phone isn't quietly sending data somewhere [4].
Until now, you had to buy a Google Pixel to run it. That requirement existed because most Android hardware doesn't meet GrapheneOS's strict security standards — verified boot with proper downgrade protection, hardware memory tagging, long-term firmware support. The deal with Motorola changes that. GrapheneOS confirmed the first supported devices will be 2027 flagship hardware — the Motorola Signature, razr fold, and razr ultra — built from the ground up to meet those requirements [4][5].
"It will initially be flagships, since those will be the 2027 devices meeting our requirements including hardware memory tagging, but it can expand over time," GrapheneOS posted on X after the announcement [5]. Motorola is also working to integrate select GrapheneOS features and concepts into its standard firmware — meaning even users who don't install the full OS will get some of the privacy architecture baked in [5].
The GrapheneOS + Motorola announcement is a bigger deal than it sounds. GrapheneOS's hardware requirements aren't arbitrary. Hardware memory tagging (ARM MTE) is a chip-level feature that lets the OS catch memory safety bugs — the class of vulnerability responsible for a huge percentage of real-world exploits. The fact that Motorola is building 2027 hardware to meet that standard means these phones will be fundamentally more resistant to the kind of attacks that compromise stock Android. This isn't marketing. It's engineering.
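For a sense of what that protection means in practice, here's a minimal, self-contained C sketch of the bug class MTE targets. This is illustrative code, not anything from GrapheneOS or Motorola: just the kind of heap overflow that memory tagging is built to catch.

```c
/* Illustration only: a classic heap buffer overflow, the class of
 * memory-safety bug ARM MTE is designed to catch. On MTE-enabled
 * hardware, the allocator tags each allocation in 16-byte granules,
 * and every load/store is checked against the pointer's tag. */
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(16);
    if (!buf) return 1;

    /* Writes 32 bytes into a 16-byte allocation. Without MTE this can
     * silently corrupt adjacent heap memory -- the raw material of
     * many real-world exploits. With MTE in synchronous mode, the
     * first out-of-bounds store raises a tag-check fault on the spot. */
    memset(buf, 'A', 32);

    free(buf);
    return 0;
}
```

On ordinary hardware, a bug like this can run without any visible symptom until an attacker finds it. On a device with MTE enforced, the program dies at the first bad store, before anything can be corrupted — which is exactly why GrapheneOS treats hardware memory tagging as a baseline requirement rather than a nice-to-have.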
What This Week Actually Tells You About Where Tech Is Headed
These two stories connect in a way that's easy to miss if you're just reading the headlines separately.
The Ray-Ban investigation lays out, in concrete terms, the privacy contract between users and the companies selling AI-powered wearables: your data is the product, human review is part of the pipeline, and the promises in the marketing copy have nothing to do with what's in the terms of service. The glasses are cool. The AI features are genuinely useful. The privacy architecture is, according to EU regulators, potentially illegal in Europe — and the company knew it and shipped anyway [3].
GrapheneOS going to Motorola is the counter-current. There's real demand — proven, growing demand — for hardware and software that takes privacy seriously as an engineering constraint rather than a marketing claim. GrapheneOS doesn't promise privacy. It implements it, at the kernel level, with verified boot and hardware memory tagging and proper sandboxing, on hardware built to spec [4].
Linus said it plainly on the WAN Show: "There are things you can do. Like not wearing Meta glasses." He went further: "Whether it's ratings.com or whether it's HouseFresh — quality data and quality information is just kind of doomed. No one actually goes the whole way." [1]
He's right that the incentive structure is broken. And he's right that it feels grim. But the GrapheneOS story is evidence that the incentive structure isn't the only thing in play. Some people — and now, apparently, some phone manufacturers — are building the harder thing because the harder thing is the right thing.
The Ray-Bans look good. The privacy policy reads fine until you know where to look. And somewhere in Nairobi, a data annotator is reviewing footage that a person in Europe or the US recorded by accident, on a device marketed as private, through a process the user never knew existed.
That's not a bug. It's the model.
Be angry at the AI giants that are scraping, that are stealing the hard work that real actual human people with blood flowing through their veins are doing. And just putting it in an AI summary and profiting off of it, benefiting from it, and not paying for it.
— Linus Sebastian, WAN Show, March 2026