Hello, all. I posted this on Wikimedia-L and Meta today and wanted to share with you as well. Apologies if you’re sick of seeing it! <3 For those who don’t know, I’ll note that a Steward opened an RFC just to see what thoughts there might be: Requests for comment/Global ban requirements on Meta.
Hello all,
My name is Maggie Dennis, and I am the Vice President of the Community Resilience and Sustainability group at the Wikimedia Foundation. Among the teams I oversee is the Trust & Safety unit. This team ensures that our projects comply with applicable law, explores ways of keeping the Wikimedia community safe, and works to minimize exposure to harm for volunteer and reader communities.
I’m reaching out today to discuss a potential gap in volunteer community policy that my teams observed while evaluating and acting on a Trust & Safety investigation. We wanted to bring this up in case you, as volunteer community members, would like to consider whether this is indeed a concern you wish to address. Before getting to that, let me give you a little context on the case.
As many of you know, we are not usually able to talk about office actions due to legal limitations. However, I am able to speak a little more to this situation, since the majority of the information around this case is already public. Today, the Foundation issued four global bans and three conduct warnings following an investigation into the activities of individuals found to be linked to the “WikiZédia” network. Based on our investigation, we concluded that this network attempted to use Wikimedia platforms for a targeted disinformation campaign engineered to influence the outcome of a national election. The banned users’ actions, which took place over an eight-month period until their community-backed blocks in February 2022, violated several provisions of our Terms of Use, which resulted in the Foundation’s office action.
Many of our projects have excellent policies and systems in place to handle such situations. Certainly French Wikipedia was on top of this. We greatly admire and appreciate the leadership of community members in identifying and confronting this situation locally. Wikimedians who work directly with content are often the first to see evidence of such campaigns, and many volunteers have deep experience in identifying problem behaviors and stopping them. By the time Trust & Safety was asked to investigate by some of those volunteers, much of the work on the local level had already been done.
However, one of the questions Trust & Safety asks itself in any case investigation (disinformation or behavioral) is whether appropriate community options exist that meet the needs of the movement and community members across it. In this case, we wondered if the current community processes support cases where individuals are behaving in ways that suggest they will never be good faith contributors on any project.
To go more into depth on what I mean: it is not uncommon for users who create problems on one project to move to another, and for some communities this is even regarded as a potential path to rehabilitation. Community-applied global bans are, under the existing policy, “exclusively applied where multiple independent communities have previously elected to ban a user for a pattern of abuse.” (emphasis in original) If an individual is here as part of a concerted group effort to undermine our very mission, should it be easier for community members to consider a global ban before that individual carries the behavior from one project to another?
Foundation policies do permit banning individuals for behavior on one project, and sometimes require it, especially where Terms of Use violations are egregious or where threats or acts of violence are involved. This is a gap where we can step in. Our goal is to support communities where we are needed and where we can.
However, we wanted to call out the question of whether community global bans should be allowed in cases where the behavior is severe but limited to one project, in case volunteer community members thought it worth discussing the existing community ban policy. Especially in cases of disinformation, these are not always the kinds of situations governed by our Universal Code of Conduct (UCoC), which speaks to the way users treat each other but not to the content itself.
If there is a desire for the Foundation to support a conversation about making such a change to community global ban policy, I hope we would be able to do so in the near future, as our Trust & Safety Policy team is dedicated to supporting the evolution of community policy as well as Foundation policy. However, I’m not suggesting that the Foundation needs to be involved at all. Trust & Safety Policy is a small team, currently very busy with the UCoC, and if they are not needed, there is no reason this conversation can’t happen spontaneously. We will provide support if needed, but really we just wanted to bring this question up for your consideration.
In this case, again, we do want to thank the French Wikipedia contributors who protected their communities and our collective readers by identifying and addressing the issue first, as well as by bringing the matter to us.
We encourage those who feel unsafe on Wikimedia projects to use local community processes or, absent such, to contact the Wikimedia Foundation for assistance. The Foundation and the community will work, together or in parallel, to enhance the safety of all users whenever necessary, with whatever means we can. To contact the Trust & Safety team about a safety issue, you can write to ca@wikimedia.org. To contact the Trust & Safety Disinformation team about a specific disinformation issue, you can write to drt@wikimedia.org.
Best regards,
Maggie