
‘People undressing; watching porn’: Meta staff claim they are reviewing disturbing footage from users’ smart glasses

Investigation finds intimate footage sent to offshore contractors for AI training

CALIFORNIA: Meta Platforms’ AI-powered smart glasses, developed with Ray-Ban, are selling briskly, but the devices are also fuelling fresh concerns over privacy and the hidden labour behind artificial intelligence training.

The company sold more than seven million pairs in 2025, a sharp rise from the two million pairs sold across 2023 and 2024 combined, underscoring strong consumer demand for wearable AI.

The glasses allow users to record first-person video using built-in cameras and microphones while analysing the environment through Meta’s AI systems. The technology promises hands-free capture and real-time assistance.

Yet critics warn the devices could deepen surveillance risks, particularly if facial-recognition capabilities are introduced in future software updates.

A separate controversy centres on how the footage is used to train AI models. Recordings captured by the glasses can be sent to offshore contractors for data labelling, a process in which human reviewers watch and annotate footage to help train AI systems.

A joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten found that contractors in Nairobi say they are routinely asked to review highly sensitive recordings.

Workers employed by outsourcing firm Sama described seeing footage of people undressing, using toilets or leaving glasses recording in private spaces.

Some videos reportedly captured bank cards, users watching pornography and even intimate scenes inside bedrooms.

Contractors told the newspapers that they felt compelled to review the footage despite their discomfort, fearing that refusing could cost them their jobs.

Meta’s terms of use allow the company to review user interactions with its AI systems, including content shared with its models. The company also advises users not to share sensitive information they do not wish to be retained.

Privacy advocates argue that many users may not fully understand how their recordings are processed after upload. Experts said users effectively lose control of their data once it is incorporated into AI training systems.

After repeated queries from the newspapers, Meta said that when live AI features are used, media is processed according to the company’s terms of service and privacy policy.

The practice of using offshore labour to review data is widespread across the AI industry. Contractors in countries such as Kenya, Colombia and India frequently perform the painstaking work of reviewing and tagging images, videos and text used to train machine-learning systems.

But as AI moves into wearable devices capable of recording everyday life, critics say the ethical stakes and the scale of sensitive data being processed are rising sharply.


Copyright © 2026 Indian Television Dot Com PVT LTD.