The Foundation for Media Alternatives (FMA) is one of the signatories of a joint statement on Facebook’s internal guidelines for content moderation and their impact on women and gender-based violence. The statement was prompted by the UK-based Guardian’s release in late May 2017 of “The Facebook Files“, leaked internal documents revealing how Facebook moderates content.

The statement was prepared by the global Association for Progressive Communications (APC), of which FMA has been a member since 2002, together with Take Back the Tech! and its partners all over the world.

Read the full statement here:

In late May the Guardian released the Facebook Files, leaked internal documents revealing how the company moderates content. Many of us have long called for more transparency around Facebook’s content moderation so we can better understand gender-based violence that happens on the platform and provide feedback. Although Facebook has made some improvements, [1] these documents confirm that it’s often one step forward, one step back, as the platform continues to censor women’s agency, especially that of women of colour [2] and especially in relation to activism, while letting harassment flourish.

Gender-based violence [3]

People facing online violence contact many of us for assistance, and according to their reports, Facebook and Facebook Messenger are by and large the platforms where the most violations take place. The details of Facebook’s content moderation corroborate precisely what women – be they cishet or LGBTQI – and gender-nonconforming people tell us they experience on the platform. Abuse, particularly nonconsensual image sharing, is rampant, and reports are often rejected with an explanation that the abuse did not violate Facebook’s community guidelines, even though these leaked documents show they are clear violations. For many users, this is tantamount to being told the abuse they experienced did not take place. We know from experience that human rights defenders are frequently silenced by Facebook itself and face a wide variety of abuse from fellow users, such as the creation of imposter profiles that discredit or defame, photo alteration to create fake intimate images, hate speech, threats and doxing.

Facebook’s policies still fail to reflect a full understanding of the experiences of people who face violence. One glaring issue is the use of the term “credible violence,” akin to the popularly derided term “legitimate rape,” which in itself perpetuates violence. Another problem is the policy to allow posts showing violence against minors, which contributes to the young person’s victimisation and the normalisation of violence. [4]

Clearly, Facebook’s limited consultation with women’s rights groups and activists has not been meaningful enough to create real change. Often, learnings from these interactions appear to stay with the company representatives present, as we have repeatedly experienced that one arm of the company does not talk to the other. Furthermore, the people designing Facebook’s functions need to hear directly from users and advocates, but engineers are never in the room. We question whether or not Facebook monitors the effectiveness of its responses to gender-based violence and other problems on the site. They certainly are not communicating publicly about what they learn despite the wealth of information they possess.

 

Language and cultural context

Many reports are rejected because of Facebook’s failure to keep up with slang and harassment trends, support all user languages and understand different cultural contexts. Speakers of minority languages may face greater harm related to online abuse because their reports are rejected or the reporting mechanism is not in their language. Facebook’s “revenge porn” guidelines do not reflect an understanding of harm in different contexts. When they require that the image involve sexual activity, they do not seem to consider that the definition of such activity can vary between communities. Images that may be perfectly acceptable in one community may put a woman at risk in another. The platform fails to recognise that what matters most is whether the person in the image finds it to be nonconsensual and faces a risk of harm.

This is an area where meaningful engagement with diverse stakeholders could make a significant difference, but we only see success at the individual level, not a systemic level, and this success is dependent on advocates working with Facebook representatives to address individual user reports. In fact, many of us and our partners in different parts of the world are able to contact the company directly to work on an individual case, but this work is not sustainable and is not effective across the board. We should not have to do Facebook’s work for them. They should engage women’s rights groups in the countries served by the platform for policy and protocol consultation that works toward systemic change rather than relying on these groups to apply bandages to a broken system.

 

Protected categories

Taking an intersectional approach to gender, we question the reasoning behind Facebook’s protected categories regarding hate speech, which do not follow an international human rights framework. The guidelines fail to address caste, and even if Facebook wrongly includes caste under “social class,” Dalits are left without protection. [5] The company says it has withdrawn its policy to ban posts praising or supporting “violence to resist occupation of an internationally recognized state,” but we know from users that Facebook accounts of Kashmiris who posted photos of Burhan Wani, for example, are still being suspended or closed.

People from any particular country are a protected category, but people migrating from one country to another are only quasi-protected. [6] Given the high rate of physical and online violence against migrants, why aren’t they fully protected? Because Facebook considers them a hot topic: “As a quasi-protected category, they will not have the full protections of our hate speech policy because we want to allow people to have broad discussions on migrants and immigration which is a hot topic in upcoming elections.” Further, a protected category combined with an unprotected category cancels out the protection, as though Muslims who are not refugees deserve more protection than Muslims who are refugees. The guide’s alarming example shows that black children (race + age) are not protected but white men (race + gender) are. This exemplifies more than convoluted guidelines: these policies ignore the way multiple oppressions intersect.

 

Moderators

It’s clear that Facebook does not have enough moderators, and it is likely that the moderators they employ are not from varied enough backgrounds in terms of characteristics such as economics, race/ethnicity, caste, religion, language and region. Yet a simple increase in moderators cannot overcome misguided policies or inadequate training and management. Still, the leaks did not reveal what we have demanded for years. What are the demographics of the people doing content moderation? What kind of training are they given beyond these internal guidelines? What kind of support do they receive?

We understand that the people Facebook hires to do the intense work of analysing violent images and abusive text at low pay face secondary trauma. By their own admission, these moderators are struggling, and we find fault with Facebook’s policies and protocols, not the moderators themselves. When the product puts even the company’s own staff at risk, [7] Facebook must begin to address the problem of violence at a systemic level. Finally, the leaked documents only address human moderation, but algorithmic moderation, while perhaps lessening the burden on human moderators, suffers from the same biases that inform the policies described above.

 

Media lessons

We applaud the Guardian for their reporting on Facebook’s content moderation and for their approach to the issue of online harassment and related violence. [8] We hope that other media outlets follow their model in analysing and sharing data about abuse on their platforms, working to protect their writers and covering this issue from a rights-based perspective.

Demands

As organisations and individuals working for women’s human rights online and off, we demand that Facebook:

1. Provide greater transparency and accountability regarding the following:

  • The implementation of content moderation guidelines;
  • The rejection of reports of online abuse and disaggregated data on reports received;
  • The departments and staff responsible for responding to content and privacy complaints.

2. Provide additional training for moderators that addresses cultural and language barriers, power dynamics, and issues such as gender bias and LGBTQ sensitivity.

3. Hire more speakers of languages that are currently under-represented among content moderators.

4. Improve the reporting mechanism so that it meets the following criteria:

  • Legitimacy: the mechanism is viewed as trustworthy and is accountable to those who use it;
  • Accessibility: the mechanism is easily located, used and understood;
  • Predictability: there is a clear and open procedure with indicative time frames, clarity of process and means of monitoring implementation;
  • Equitable: it provides sufficient information and advice to enable individuals to engage with the mechanism on a fair and informed basis;
  • Transparent: individuals are kept informed about the progress of their matter;
  • Rights-respecting: the outcomes and remedies accord with internationally recognised human rights;
  • Source of continuous learning: the mechanism enables the platform to draw on experiences to identify improvements for the mechanism and to prevent future grievances.

5. Increase diversity at all staff levels and adopt the Women’s Empowerment Principles.

Signatories:
Association for Progressive Communications
Association for Women’s Rights in Development (AWID)
Asociación Trinidad
BlueLink
CITAD
Derechos Digitales
Fantsuam Foundation
Foundation for Media Alternatives
GreenNet
Heartmob
Hollaback
Inji Pennu
Japleen Pasricha, Feminism in India
Kéfir
LaborNet
Luchadoras
May First/People Link
Nana Darkoa Sekyiamah
Nica Dumlao
One World Platform
Persatuan Kesedaran Komuniti Selangor (EMPOWER)
Strategic Advocacy for Human Rights
Sula Batsú

Notes:

[1] For example, see Facebook’s recent announcement of how the platform will better respond to nonconsensual dissemination of intimate images.
[2] See, for example, the case of US writer Ijeoma Oluo, whose account was suspended after she posted screenshots of racist, sexist and often violent harassment she faced on the site, although Facebook did nothing about the harassment. The platform restored Oluo’s account after public outcry, claiming they accidentally suspended it. Such suspensions happen often enough to be a worrisome pattern.
[3] Online gender-based violence refers to any act that results in harm due to gender and involves the use of information and communication technology. Examples include but are not limited to online harassment, cyberstalking, sextortion, nonconsensual dissemination of intimate images and hate speech.
[4] Take Back the Tech!’s “I don’t forward violence” campaign addresses the problem of violence as spectacle and repeat victimisation, and could inform improved Facebook policies in this area.
[5] Section 4 of the Committee on the Elimination of Racial Discrimination’s General Recommendation 29 says that States must “(r) Take measures against any dissemination of ideas of caste superiority and inferiority or which attempt to justify violence, hatred or discrimination against descent-based communities” and “(s) Take strict measures against any incitement to discrimination or violence against the communities, including through the Internet.”
[6] Section 3 of the Committee on the Elimination of Racial Discrimination’s General Recommendation 30 compels States to “11. Take steps to address xenophobic attitudes and behaviour towards non-citizens, in particular hate speech and racial violence, and to promote a better understanding of the principle of non-discrimination in respect of the situation of non-citizens” and “12. Take resolute action to counter any tendency to target, stigmatize, stereotype or profile, on the basis of race, colour, descent, and national or ethnic origin, members of ‘non-citizen’ population groups, especially by politicians, officials, educators and the media, on the Internet and other electronic communications networks and in society at large.”
[7] See, for example, Fortune’s story on how moderators’ identities were accidentally shared with suspected terrorists on the platform.
[8] See the Guardian’s “Web we want” series, and in particular, their work on comments.

 
