Meta Faces New Scrutiny Over Claims Youngsters Are Exposed To Harmful Content

While X has been the focus of scrutiny for its alleged content moderation failures of late, Meta is also facing questions over how well its systems are protecting users, particularly youngsters, and over the accuracy of its external reporting on those efforts.

According to a newly unsealed complaint against the company, filed on behalf of 33 states, Meta has repeatedly misrepresented the performance of its moderation teams in its Community Standards Enforcement Reports, which new findings suggest do not reflect Meta’s own internal data on violations.

As reported by Business Insider:

“[Meta’s] Community Standards Enforcement Reports tout low rates of community standards violations on its platforms, but exclude key data from user experience surveys that evidence much higher rates of user encounters with harmful content. For example, Meta says that for every 10,000 content views on its platforms only 10 or 11 would contain hate speech. But the complaint says an internal user survey from Meta, known as the Tracking Reach of Integrity Problems Survey, reported an average of 19.3% of users on Instagram and 17.6% of users on Facebook reported witnessing hate speech or discrimination on the platforms.”

In this sense, Meta is seemingly watering down such incidents through averaging: it divides a comparatively small number of violating views by the massive total of content views across its platforms, which yields a tiny per-view prevalence. The survey data measures something different, whether a user encountered harmful content at all over a period, and by that measure exposure is far higher. So while the aggregate figures suggest very low rates, the user experience, evidently, is different.
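To see how both figures can be true at once, here’s a minimal sketch of the arithmetic. The per-view prevalence comes from the article; the views-per-user figure and the independence assumption are purely illustrative, not Meta’s data:

```python
# Sketch: why per-view prevalence and per-user exposure can diverge.
# Per-view prevalence is from the article; the survey-window view count
# and the independence assumption are illustrative assumptions only.

per_view_prevalence = 10.5 / 10_000   # Meta's CSER figure: ~10-11 hate-speech views per 10,000

views_in_survey_window = 200          # assumed content views per user in the survey window

# Probability a user sees at least one violating post, treating each
# view as an independent draw (a simplification).
p_exposed = 1 - (1 - per_view_prevalence) ** views_in_survey_window

print(f"Per-view prevalence: {per_view_prevalence:.2%}")         # ~0.11%
print(f"Share of users exposed at least once: {p_exposed:.1%}")  # ~19.0%
```

Under these assumed numbers, a per-view rate of roughly 0.1% still leaves around one in five users encountering hate speech at least once, which is in the same ballpark as the survey figures cited in the complaint. The two metrics aren’t contradictory, they just answer different questions.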

The complaint alleges that Meta knows this, yet has presented these alternative stats publicly to deflect scrutiny and to project a false sense of safety around its apps and its approach to user protection.

In a potentially even more disturbing element of the same complaint, Meta has also reportedly received more than 1.1 million reports of users under the age of 13 accessing Instagram since early 2019, yet has disabled “only a fraction of those accounts”.

The allegations have been laid out as part of a federal lawsuit filed last month in the U.S. District Court for the Northern District of California. If Meta’s found to be in violation of privacy laws as a result of these claims, it could face huge fines, and come under further scrutiny around its protection and moderation measures, particularly in relation to younger user access.

Depending on the results, that could have a major impact on Meta’s business, while it may also lead to more accurate insight into the actual rates of exposure and potential harm within Meta’s apps.

In response, Meta says that the complaint mischaracterizes its work by “using selective quotes and cherry-picked documents”.

It’s another challenge for Meta’s team, one that could put the spotlight back on Zuck and Co. over effective moderation and exposure, and that may also lead to the implementation of even tougher regulations around young users and data access.

That could, eventually, bring the U.S. more into line with the stricter rules of the E.U.

In Europe, the new Digital Services Act (D.S.A.) includes a range of provisions designed to protect younger users, including a ban on serving minors ads based on profiling of their personal data. Similar restrictions could result from this new U.S. push, though it remains to be seen whether the complaint will move ahead, and how Meta will look to counter it.

Though really, it’s no surprise that so many youngsters are accessing Instagram at such high rates.

Last year, a report from Common Sense Media found that 38% of kids aged 8 to 12 were using social media on a daily basis, a number that has been steadily rising over time. And while Meta has sought to implement better age detection and security measures, many kids still access the adult versions of its apps, often by simply entering a false year of birth.

Of course, there is also an onus on parents to monitor their child’s screen time, and to ensure that they’re not logging into apps that they shouldn’t. But if an investigation does indeed show that Meta has knowingly allowed underage access, that could create a range of new complications, for Meta and for the social media sector more broadly.

It’ll be interesting to see where the complaint leads, and what further insight we get into Meta’s reporting and protection measures as a result.
