Brazil orders Meta to stop using user data to train AI

The decision was issued by Brazil's National Data Protection Authority, which warned it would impose a fine of $8,800 a day


AFP July 03, 2024

Brazil on Tuesday demanded Meta stop using users' data to train its generative AI models, a move the US tech giant called "a setback."

The decision was issued by Brazil's National Data Protection Authority, which warned it would impose a daily fine of 50,000 Brazilian reais (about $8,800) as long as the parent company of Facebook, Instagram and WhatsApp was out of compliance.

The agency cited the company's new privacy policy, updated on June 26, which outlines terms regarding "the use of personal data for training purposes of generative AI systems," according to a government statement announcing the move.

The authority called its ban a "preventative measure" that was made "due to the imminent risk of serious and irreparable or difficult to repair damage to the fundamental rights of the affected data subjects."

A spokesperson for Meta said the company was disappointed in the decision.

"AI training is not something unique to our services and we are more transparent than many players in this industry who have used public content to train their models and products," Meta said in a statement sent to AFP.

"This is a setback for innovation and competitiveness in AI development, and delays the arrival of AI benefits for people in Brazil," it added.

Brazil has about 109 million active Facebook users and 113 million Instagram users, according to data firm Statista.

Recent advancements in generative AI have prompted warnings from some experts and academics who advocate for regulation of the emergent technology.

In June, Meta suspended its new AI-friendly privacy policy in the European Union after 11 countries complained.
