Wednesday, April 06, 2022

Online abuse study

CornwallLive story  

In a statement, Cindy Southworth, Meta’s head of women’s safety, said: “While we disagree with many of the CCDH’s conclusions, we do agree that the harassment of women is unacceptable.

“That’s why we don’t allow gender-based hate or any threat of sexual violence, and last year we announced stronger protections for female public figures.

“Messages from people you don’t follow go to a separate request inbox where you can either block or report the sender, or you can turn off message requests altogether.

“Calls from people you don’t know only go through if you accept their message request and we offer a way to filter abusive messages so you never have to see them.”

But this isn't doing anything to address the actual problem, which is the people sending abusive messages and rape threats to women in the public eye. The problem isn't these women seeing the abuse. (I mean, obviously they don't want to see it, but it's the wrong direction to be tackling it from, surely?)

Instead, could there be some sort of filtering before sending to a stranger? i.e. anything containing obvious swear words or misogynistic/racist language gets rejected immediately and you get a message to the effect of: "This message content does not pass our standards for civil communications. If you intend to contact other customers of ours, please revise what you want to say and ensure it fits with our standards as set out here [link to suitable policy]. Repeated flagging of your account for inappropriate messages will lead to a suspension or ban."
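
Something like this very rough Python sketch, just to make the idea concrete. The blocklist, the function names and the rejection text are all placeholders of mine, not anything Meta actually does, and a real system would need far more than simple keyword matching (misspellings, context, multiple languages):

```python
import re

# Placeholder terms only, not a real blocklist.
BLOCKLIST = {"exampleslur1", "exampleslur2"}

REJECTION_NOTICE = (
    "This message content does not pass our standards for civil communications. "
    "Please revise what you want to say and ensure it fits with our standards. "
    "Repeated flagging of your account for inappropriate messages will lead to "
    "a suspension or ban."
)

def passes_civility_check(message: str) -> bool:
    """Return False if the message contains any blocklisted term."""
    words = re.findall(r"[a-z']+", message.lower())
    return not any(word in BLOCKLIST for word in words)

def send_to_stranger(message: str) -> str:
    """Reject the message outright if it fails the check; otherwise deliver it."""
    if not passes_civility_check(message):
        return REJECTION_NOTICE
    return "Message delivered."
```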

When sending to a stranger, perhaps you get a caution as you press send: "Are you sure you want to send this message? We want to build constructive dialogues and healthy online communities; if this message is not in that spirit, please do not send it. If the content of your message breaks our community standards [policy here], you risk losing your account with us and potentially facing legal action." Or something like that?
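
And a similarly rough sketch of the caution-on-send step, again with made-up names: `is_stranger` stands in for whatever "does not follow / has not accepted a message request" check the platform would actually use.

```python
CAUTION_PROMPT = (
    "Are you sure you want to send this message? We want to build constructive "
    "dialogues and healthy online communities. If the content of your message "
    "breaks our community standards, you risk losing your account and "
    "potentially facing legal action."
)

def send(message: str, recipient: str, is_stranger: bool, confirmed: bool = False) -> str:
    """Require an explicit confirmation before a message goes to a stranger."""
    if is_stranger and not confirmed:
        # The sender has to re-submit with confirmed=True to actually send.
        return CAUTION_PROMPT
    return f"Message sent to {recipient}."
```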
