
Facebook again fails to detect hate speech in advertisements

SAN FRANCISCO — The test couldn’t have been much easier, and Facebook still failed.

Facebook and parent company Meta flopped again in a test of how well they could detect clearly violent hate speech in ads submitted to the platform by the nonprofits Global Witness and Foxglove.

The hate speech focused on Ethiopia, where internal documents obtained by whistleblower Frances Haugen showed that Facebook’s ineffective moderation is “literally fueling ethnic violence,” as she said in her 2021 congressional testimony. In March, Global Witness ran a similar test with hate speech in Myanmar, which Facebook also failed to detect.

The group created 12 text-based ads that used dehumanizing hate speech to call for the murder of people belonging to each of Ethiopia’s three main ethnic groups: the Amhara, the Oromo, and the Tigrayans. Facebook’s systems approved the ads for publication, just as they did with the Myanmar ads, though the ads were never actually published.

However, this time the group informed Meta about the undetected violations. The company said the ads should not have been approved, pointing to its work “to build our capacity to capture hateful and incendiary content in the most widely spoken languages, including Amharic.”

A week after hearing from Meta, Global Witness submitted two more ads for approval, again containing blatant hate speech. Both ads, again written in Amharic, the most widely used language in Ethiopia, were approved.

Meta did not respond to multiple messages for comment this week.

“We picked the worst cases we could think of,” said Rosie Sharpe, a campaigner at Global Witness. “The cases that should be easiest for Facebook to detect. It wasn’t code language. It wasn’t dog whistles. It was explicit statements saying that these types of people are not human or that these types of people should be starved.”

Meta has consistently declined to say how many content moderators it has in countries where English is not the primary language. This includes moderators in Ethiopia, Myanmar, and other regions where material posted on the company’s platforms has been linked to real-world violence.


In November, Meta said it had removed a post by Ethiopia’s Prime Minister Abiy Ahmed urging citizens to rise up and “bury” rival Tigray forces threatening the country’s capital.

In the now-deleted post, Abiy said the “obligation to die for Ethiopia belongs to all of us.” He called on citizens to mobilize “by holding a weapon or capacity.”

However, Abiy has continued to post on the platform, where he has 4.1 million followers. The US and others have warned Ethiopia against “dehumanizing rhetoric” after the prime minister described Tigray forces as “cancer” and “weed” in July 2021.

“If ads calling for genocide in Ethiopia repeatedly get through Facebook’s net — even after the problem has been flagged up with Facebook — there’s only one possible conclusion: no one is home,” said Rosa Curling, director of Foxglove, a London-based legal nonprofit that partnered with Global Witness in the investigation. “Years after the Myanmar genocide, it’s clear that Facebook hasn’t learned its lesson.”

