What is happening

Meta published a report detailing how the social media giant affected human rights during the Israeli-Palestinian conflict of May 2021.

Why it matters

Moderating content in languages other than English is a constant challenge for social media companies. Meta is making changes in response to the report's findings.

Facebook’s parent company, Meta, made content moderation errors that affected the human rights of Palestinians during an outbreak of violence in the Gaza Strip in May 2021, a report released on Thursday shows.

Meta asked the consultancy Business for Social Responsibility (BSR) to review how the company’s policies and actions affected Palestinians and Israelis after its oversight board, which reviews some of the social media company’s toughest content moderation decisions, recommended that it do so.

The report found that Meta’s actions removed or reduced the ability of Palestinians to enjoy their human rights “to freedom of expression, freedom of assembly, political participation and non-discrimination.” It also highlights the ongoing challenges the company faces in moderating content in languages other than English. Meta owns Facebook, the world’s largest social network; the photo and video service Instagram; and the messaging app WhatsApp.

BSR said in the report that it had spoken with affected stakeholders and that many shared their “view that Meta appears to be another powerful entity suppressing their voice.”

The findings outline several content moderation mistakes Meta made during the Israeli-Palestinian conflict last year. Arabic-language content saw “greater over-enforcement,” meaning the company mistakenly removed posts by Palestinians. BSR also found that “rates of proactive detection of potentially violating Arabic content are significantly higher than proactive detection rates of potentially violating Hebrew content.”

Hebrew content, meanwhile, suffered “greater under-enforcement” because Meta did not have what is known as a “classifier” for hate speech in that language. A classifier helps the company’s artificial intelligence systems automatically identify posts that are likely to violate its policies. Meta also lacked Hebrew-speaking staff and had outsourced content moderation.
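To illustrate the concept, here is a minimal sketch of what a text classifier of this kind looks like in practice. This is not Meta’s actual system: the training examples, labels and review threshold below are all hypothetical, and production systems are vastly larger, multilingual and continuously retrained.

```python
# Illustrative sketch only: NOT Meta's system. It shows the general idea of
# a classifier that scores posts for likely policy violations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = violates policy, 0 = benign.
posts = [
    "example of a violating post",
    "example of an ordinary post",
    "another violating example",
    "another ordinary example",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common text-classification
# baseline. Training such a model requires labeled data in the target
# language, which is why the report notes the missing Hebrew classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a moderation pipeline would flag posts whose predicted
# violation probability exceeds some tunable review threshold.
prob_violation = model.predict_proba(["a new post to screen"])[0][1]
if prob_violation > 0.8:  # hypothetical threshold
    print("flag for human review")
```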

Meta also mistakenly removed content that did not violate its rules. The human rights impact of “these errors was more serious given a context in which rights such as freedom of expression, freedom of association and safety were of heightened importance, particularly for activists and journalists,” the report said.

The report also pointed to other significant content moderation errors on Meta’s platforms. For example, Instagram briefly banned #AlAqsa, a hashtag that refers to the Al-Aqsa Mosque in Jerusalem’s Old City. At the same time, users posted hate speech and incitement to violence against Palestinians, Arab Israelis, Jewish Israelis and Jewish communities outside the region, and Palestinian journalists reported that their WhatsApp accounts were blocked.

However, BSR found no intentional bias within the company or among Meta employees. It did identify “various instances of unintentional bias where Meta’s policy and practice, combined with broader external dynamics, do lead to disparate human rights impacts on Palestinian and Arabic speaking users.”

Meta said it is making changes to address the issues identified in the report. The company, for example, said it will continue to develop and deploy machine learning classifiers in Hebrew.

“We believe this will greatly improve our ability to deal with situations like this, where we see large spikes in violating content,” Meta’s director of human rights, Miranda Sissons, said in a blog post.

https://www.cnet.com/news/social-media/facebook-parent-meta-impacted-palestinians-human-rights-report-says/#ftag=CADf328eec
