Facebook’s automated image removal system is flawed, Oversight Board says

In 2019, Meta (then Facebook) CEO Mark Zuckerberg announced the creation of an independent content oversight board. The panel of 23 experts and civic leaders is tasked with reviewing contested moderation decisions, following sustained criticism from users and observers over the company’s complex and often inconsistently applied policies. According to Zuckerberg at the time, the rulings of his so-called Supreme Court of Social Media “will be binding, even if I or someone at Facebook disagrees.” Since then, the board has issued a series of rulings on issues such as hate speech, misinformation, and nudity. But the group’s most recent decision is perhaps its harshest condemnation yet of Meta’s internal strategy and its ability to solve its ongoing moderation problems.

Earlier today, the Oversight Board announced the results of an appeal concerning the removal of a Colombian political cartoon depicting police brutality. In a post this morning, the board explained why Facebook should strike the cartoon from its Media Matching Service banks, a system that uses AI analysis to identify and remove flagged images that violate the site’s content policies. It also explained why the system as a whole is deeply flawed.

[Related: Meta should encrypt abortion-related DMs, advocates say. ]

“Meta was wrong to add this cartoon to its Media Matching Service bank, which led to a mass and disproportionate removal of the image from the platform, including the content posted by the user in this case,” the Oversight Board wrote, before warning that “despite 215 users appealing these removals, and 98 percent of those appeals being successful, Meta still did not remove the cartoon from this bank until the case reached the Board.”

The Board goes on to say that Facebook’s existing automated content removal systems can amplify, and already have amplified, incorrect decisions made by individual human employees. This is especially problematic given the cascading effect of such removals, particularly when, as in this case, the content is political speech criticizing state actors, the board warns. It is calling for greater transparency and accountability in response.

[Related: Meta’s chatbot repeats biases and incorrect user information. ]

Unfortunately, this is where social media’s “Supreme Court” differs from the one in Washington, D.C.: because it is an independent committee, Facebook has no legal obligation to abide by its recommendations. Still, by making the Oversight Board’s opinions public, Zuckerberg and Meta executives may at least face increased pressure to continue reforming their moderation strategies.
