The Foundation for Media Alternatives (FMA) is one of the signatories of a joint statement on Facebook’s internal guidelines for content moderation and their impact on women and gender-based violence. The statement was prompted by the UK-based Guardian’s publication in late May 2017 of “The Facebook Files”, leaked internal documents revealing how Facebook moderates content.
The statement was prepared by the global Association for Progressive Communications (APC), of which FMA has been a member since 2002, together with Take Back the Tech! and its partners around the world.
Read the full statement here:
In late May the Guardian released the Facebook Files, leaked internal documents revealing how the company moderates content. Many of us have long called for more transparency around Facebook’s content moderation so we can better understand gender-based violence that happens on the platform and provide feedback. Although Facebook has made some improvements,1 these documents confirm that it’s often one step forward, one step back, as the platform continues to censor women’s agency, especially women of colour2 and especially in relation to activism, while letting harassment flourish.
Many reports are rejected because of Facebook’s failure to keep up with slang and harassment trends, support all user languages and understand different cultural contexts. Speakers of minority languages may face greater harm from online abuse because their reports are rejected or the reporting mechanism is not available in their language. Facebook’s “revenge porn” guidelines do not reflect an understanding of harm in different contexts. In requiring that an image involve sexual activity, they do not seem to consider that the definition of such activity can differ across communities. An image that is perfectly acceptable in one community may put a woman at risk in another. The platform fails to recognise that what matters most is whether the person in the image considers it nonconsensual and faces a risk of harm.
People from any particular country are a protected category, but people migrating from one country to another are only quasi-protected.6 Given the high rate of physical and online violence against migrants, why aren’t they fully protected? Because Facebook considers them a hot topic: “As a quasi-protected category, they will not have the full protections of our hate speech policy because we want to allow people to have broad discussions on migrants and immigration which is a hot topic in upcoming elections.” Further, combining a protected category with an unprotected one cancels out the protection, as though Muslims who are not refugees deserve more protection than Muslims who are refugees. The guide’s alarming example shows that black children (race + age) are not protected while white men (race + gender) are, which points to more than convoluted guidelines: these policies ignore the way multiple oppressions intersect.
1. Provide greater transparency and accountability regarding the following:
- The implementation of content moderation guidelines;
- The rejection of reports of online abuse and disaggregated data on reports received;
- The departments and staff responsible for responding to content and privacy complaints.
2. Provide additional training for moderators that addresses cultural and language barriers, power dynamics, and issues such as gender bias and LGBTQ sensitivity.
3. Hire more speakers of languages that are currently under-represented among content moderators.
4. Improve the reporting mechanism so that it meets the following criteria:
- Legitimacy: the mechanism is viewed as trustworthy and is accountable to those who use it;
- Accessibility: the mechanism is easily located, used and understood;
- Predictability: there is a clear and open procedure with indicative time frames, clarity of process and means of monitoring implementation;
- Equitability: it provides sufficient information and advice to enable individuals to engage with the mechanism on a fair and informed basis;
- Transparency: individuals are kept informed about the progress of their matter;
- Rights compatibility: the outcomes and remedies accord with internationally recognised human rights;
- Source of continuous learning: the mechanism enables the platform to draw on experiences to identify improvements for the mechanism and to prevent future grievances.
5. Increase diversity at all staff levels and adopt the Women’s Empowerment Principles.
Association for Women’s Rights in Development (AWID)
Strategic Advocacy for Human Rights
Sulá Batsú
1 For example, see Facebook’s recent announcement of how the platform will better respond to nonconsensual dissemination of intimate images.
2 See, for example, the case of US writer Ijeoma Oluo, whose account was suspended after she posted screenshots of racist, sexist and often violent harassment she faced on the site, although Facebook did nothing about the harassment. The platform restored Oluo’s account after public outcry, claiming they accidentally suspended it. Such suspensions happen enough to be a worrisome pattern.
3 Online gender-based violence refers to any act that results in harm due to gender and involves the use of information and communications technology. Examples include but are not limited to online harassment, cyberstalking, sextortion, nonconsensual dissemination of intimate images and hate speech.
4 Take Back the Tech!’s “I don’t forward violence” campaign addresses the problem of violence as spectacle and repeat victimisation and could inform improved Facebook policies in this area.
5 Section 4 of the Committee on the Elimination of Racial Discrimination’s General Recommendation 29 says that States must “(r) Take measures against any dissemination of ideas of caste superiority and inferiority or which attempt to justify violence, hatred or discrimination against descent-based communities” and “(s) Take strict measures against any incitement to discrimination or violence against the communities, including through the Internet.”
6 Section 3 of the Committee on the Elimination of Racial Discrimination’s General Recommendation 30 compels States to “11. Take steps to address xenophobic attitudes and behaviour towards non-citizens, in particular hate speech and racial violence, and to promote a better understanding of the principle of non-discrimination in respect of the situation of non-citizens” and “12. Take resolute action to counter any tendency to target, stigmatize, stereotype or profile, on the basis of race, colour, descent, and national or ethnic origin, members of “non-citizen” population groups, especially by politicians, officials, educators and the media, on the Internet and other electronic communications networks and in society at large.”
7 See, for example, Fortune’s story on how moderators’ identities were accidentally shared with suspected terrorists on the platform.
8 See the Guardian’s “Web we want” series, and in particular, their work on comments.