who did i work with?

what did we do?

we were on a mission to enable Wikimedia communities to automatically prevent or revert obvious vandalism, so that moderators would have more time to spend on other activities.

why this project?

a substantial number of edits made to Wikimedia projects should unambiguously be undone. patrollers and administrators have to spend a lot of time manually reviewing and reverting these edits, which contributes to a feeling, particularly on larger wikis, that there is an overwhelming amount of work requiring attention compared to the number of active moderators. our team wanted to reduce this burden, freeing up moderator time for other tasks.

our idea

our idea was to build an automated moderation system, leveraging the new Revert Risk machine learning models, that would give communities configuration options to automatically revert edits that are likely to be damaging.
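to make that idea concrete, here is a minimal sketch of the core decision, assuming the publicly documented Lift Wing inference endpoint and response shape for the language-agnostic Revert Risk model; the threshold value and function name are illustrative, not Automoderator's actual implementation:

```python
import requests

# hypothetical threshold: Automoderator's real thresholds are
# community-configured, so this value is illustrative only.
REVERT_THRESHOLD = 0.95

# public Lift Wing inference endpoint for the language-agnostic
# Revert Risk model (assumed from the Lift Wing documentation).
LIFTWING_URL = (
    "https://api.wikimedia.org/service/lw/inference/v1/models/"
    "revertrisk-language-agnostic:predict"
)

def should_revert(rev_id: int, lang: str) -> bool:
    """return True when the model's revert probability for this
    revision meets or exceeds the configured threshold."""
    resp = requests.post(
        LIFTWING_URL,
        json={"rev_id": rev_id, "lang": lang},
        headers={"User-Agent": "automoderator-sketch/0.1"},
        timeout=10,
    )
    resp.raise_for_status()
    # the model returns probabilities for "true" (likely to be
    # reverted) and "false"; the decision acts on "true".
    score = resp.json()["output"]["probabilities"]["true"]
    return score >= REVERT_THRESHOLD
```

a real system would layer community-defined exemptions (for example, edits by established users) on top of this single threshold check, but the score-versus-threshold comparison is the heart of the design.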

hypotheses

prototyping

i wanted to give our volunteer community the opportunity to test the Revert Risk machine learning model first, so that they could understand it and give us a thumbs up to proceed with the project.

i designed a low-cost, lightweight prototype using Google Sheets [add link] and distributed it to over ten users across multiple languages. in the Wikimedia world, we make our designs multilingual and test them across languages whenever possible, because our Movement is global.
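for a sense of how rows for such a spreadsheet prototype could be produced, here is a hedged sketch: it pulls recent edits from the MediaWiki recent changes API, scores each with the Revert Risk model, and writes a CSV ready to import into a sheet for volunteer review. the endpoints are the public MediaWiki and Lift Wing APIs, but the workflow, file name, and function names are my assumptions, not a record of how the actual prototype was built.

```python
import csv
import requests

HEADERS = {"User-Agent": "revertrisk-prototype-sketch/0.1"}
LIFTWING_URL = (
    "https://api.wikimedia.org/service/lw/inference/v1/models/"
    "revertrisk-language-agnostic:predict"
)

def score_revision(rev_id: int, lang: str) -> float:
    """score one revision with the Revert Risk model on Lift Wing."""
    resp = requests.post(
        LIFTWING_URL,
        json={"rev_id": rev_id, "lang": lang},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["output"]["probabilities"]["true"]

def export_recent_edits(lang: str, path: str, limit: int = 25) -> None:
    """write recent edits and their revert-risk scores to a CSV
    that can be imported into a spreadsheet for volunteer review."""
    resp = requests.get(
        f"https://{lang}.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "recentchanges",
            "rctype": "edit",
            "rclimit": limit,
            "rcprop": "ids|title",
            "format": "json",
        },
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    changes = resp.json()["query"]["recentchanges"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["rev_id", "title", "revert_risk"])
        for rc in changes:
            writer.writerow(
                [rc["revid"], rc["title"], score_revision(rc["revid"], lang)]
            )

# example: sample recent Spanish Wikipedia edits into a CSV
export_recent_edits("es", "revert_risk_sample_es.csv")
```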