Meta faces EU scrutiny over AI data use without consent
Advocacy group NOYB criticised Meta's plan to use personal data for training its AI models without user consent, urging privacy authorities across Europe to intervene.
The group called on national privacy watchdogs to address recent changes in Meta's privacy policy, effective June 26, which would enable the use of years of personal posts, images, and online tracking data for Facebook's AI technology.
In response to the impending policy changes, 11 complaints were filed, requesting urgent action from data protection authorities in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland, and Spain.
Meta countered the criticism by referring to a May 22 blog post that stated it uses publicly available and licensed information, as well as publicly shared user data, for AI training.
However, a message to Facebook users indicated that Meta could process information about individuals without accounts if they appear in images or posts shared by users.
A Meta spokesperson asserted that their approach complies with privacy laws and is consistent with the practices of other tech companies such as Google and OpenAI.
The advocacy group has previously filed complaints against Meta and other major tech companies for alleged violations of the EU's General Data Protection Regulation (GDPR), which can result in fines up to 4% of a company's global turnover.
Meta has defended its use of user data for developing generative AI models, citing "legitimate interest" as its legal basis under the GDPR.
Max Schrems, founder of NOYB, stated that the Court of Justice of the European Union (CJEU) had ruled against Meta's claim of legitimate interest for advertising purposes in 2021.
He argued that Meta is ignoring these rulings by making similar arguments for AI training, and criticised the complex opt-out process imposed on users.
Schrems emphasised that the law requires Meta to obtain opt-in consent from users, rather than offering a hidden and misleading opt-out option.