Meta’s all-out A.I. push has taken a hit, with the company forced to scale back its A.I. plans in Europe amid concerns around how it’s looking to fuel its A.I. models with user data from both Facebook and Instagram.
As reported by Reuters:
“Meta will not launch its Meta A.I. models in Europe for now after the Irish privacy regulator told it to delay its plan to harness data from Facebook and Instagram users. The move by Meta came after complaints and a call by advocacy group NOYB to data protection authorities in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain to act against the company.”
At issue is the fact that Meta is using public posts on Facebook and Instagram to feed its A.I. systems, which may violate E.U. data usage regulations. Meta has acknowledged that it is using public posts to power its Llama models, but says that it’s not using audience-restricted updates or private messages, which it believes aligns with the parameters of its user privacy agreements.
Meta outlined these specifics, in relation to European users, in a blog post just last month:
“We use publicly available online and licensed information to train AI at Meta, as well as the information that people have shared publicly on Meta’s products and services. This information includes things like public posts or public photos and their captions. In the future, we may also use the information people share when interacting with our generative AI features, like Meta AI, or with a business, to develop and improve our AI products. We don’t use the content of your private messages with friends and family to train our AIs.”
Meta has been working to address E.U. concerns around its A.I. models, and has been informing E.U. users, via in-app alerts, of how their data may be used in this context.
But now, that work is on hold until E.U. regulators have had a chance to assess these latest concerns, and whether Meta’s approach aligns with G.D.P.R. requirements.
It’s a difficult area, because while Meta can argue that it’s within its rights to use this data under its broad-reaching user agreements, many users would be unaware that their public posts are being added to Meta’s A.I. data pool.
Is that a concern?
Well, if you’re a creator, and you’re looking to reach as large an audience as possible on Facebook and Instagram, then you’re going to post publicly. But that means that any text or visual elements you share in this context are fair game for Meta to repurpose in its A.I. models.
So when you see an image generated by Meta A.I. that looks a lot like yours, it may well be derivative of your work.
Really, this is part of the broader concern around A.I. models and how they harvest user data from the web. Technically, Meta is correct that it has outlined this usage within its agreements, but E.U. officials are likely to call for more specific permissions, which would see European users explicitly prompted to allow, or refuse, the re-use of their content in Meta’s A.I. models.
I would think that this is the most likely outcome, but for now, it means that the roll-out of Meta’s A.I. tools in Europe will be delayed a little longer.