Question of the Day

2026-03-05

One question per day to look beyond the headlines.

Why did Meta’s Ray-Ban glasses privacy fight turn on human reviewers in Nairobi, not encryption or on-device AI?

Take-away: Privacy risk concentrates in the AI-training pipeline. Exporting "on-device" footage for human labeling creates a new access boundary where encryption claims no longer apply.

Meta's privacy controversy around its Ray-Ban AI smart glasses stemmed from the practice of sending user footage to human reviewers in Nairobi, Kenya, rather than from any weakness in encryption or on-device AI. Investigative reports revealed that contractors in Kenya were reviewing intimate footage captured by the glasses, prompting significant consumer concern and claims of privacy violations [1], [4]. The human review process involves labeling audio and video content to improve Meta's AI systems, and that content reportedly includes private and sensitive moments [2], [6]. Meta's privacy stance asserts that data remains on the device unless the user shares it; critics counter that this practice is inconsistent with the company's claim that the glasses are "designed for privacy" [3], [5]. The pipeline exposes users to risks ranging from the disclosure of embarrassing content to identity theft [3], [4].

Sources · 2026-03-06