Like almost every community on the web, this community has a set of Rules and is moderated by a team of volunteers who make decisions and
enforce the rules. All of the moderators share the same goal - to make this community a fun,
positive space for all of our members.
This report, written by the moderation team, aims to give transparency into our decision-making and to
show why content was removed from our platforms. It aims to create
an open dialogue, where users can give feedback to the moderators on
actions they believe were incorrectly moderated or where rules were inconsistently applied. It
hopes to achieve this by providing candid access to data about moderation actions.
Content moderation is needed to prevent the spread of illegal and harmful content - and that is one
part of the role our moderators take in the community. However, community moderation involves
more than just stopping the spread of harmful content. Making sure everyone in our community
feels safe and welcome is important to us. That's why we need moderators to watch over everything
that's going on: it can take just one group, one person, or one action to ruin things for
everyone else.
We use a mixture of human (manual) moderation and automated filters. Filters are useful for capturing
content that cannot be misinterpreted and is unacceptable in all contexts. Human moderators are better
suited to situations where nuance or contextual knowledge is attached to a decision.
If you have an interest in the moderation actions of the community, we would encourage you to at least
read over the summary of the report. If you have stumbled upon this report and have no idea about
the dexbonus community, then it's probably not necessary reading.
Data Summary
This report analysed data from September 2020 until September 2021. The data was primarily recorded
from Twitch, as few - if any - moderation actions occur on our Discord server. The analysis and
findings of the report are outlined below in wording which should be understandable to
someone with a high school education. If you have questions about the report, you can Contact Us and we will be happy to answer them.
Data Analysed
Our dataset covers a one-year timeframe, starting in September 2020
and ending in September 2021. The year of data contains 4,239 moderation actions,
consisting of deleted messages, timeouts and bans.
We analysed 262 bans, 1,549 timeouts and 2,428 deleted messages. This
data was captured as part of the routine logging which Twitch provides to moderation
teams. We also separately analysed 131 unban requests managed through the Twitch
platform; it is not safe to assume that these unban requests relate to the 262 bans
analysed, as a ban appeal can be raised at any time.
General Themes
The moderators of the community must reserve the right to remove any
content for any reason; however, the majority of actions fall within the need to
remove content which violates our rules.
In 2021 Twitch added the ability for users to see the
reason they were timed out. Since our senior team was made aware of
that change, we have asked moderators to add reasons to all timeouts. Once moderators
were instructed to add reasons to all timeouts, they added reasons in almost all
situations. There was one specific example where this wasn't followed, which will be
discussed later.
We also implemented a new automated moderation filter in 2021 to
help protect against hate raids. During the training and setup of this filter, there
were some false positives; these are detailed below.
For actions where a reason was recorded, the most common reasons for content being
removed were:
Automated Moderation Filter. This is primarily content which was
caught as a false positive by our filters whilst they were being set up. However,
there are some situations where the content was removed correctly.
Content deemed to be spoilers or backseat gaming. These are
situations where content was removed for spoiling part of a game, movie or book,
or where content was removed because someone was backseat gaming - a
situation where you tell the streamer how to play the game.
Automated Emote Spam Filter. This is where content was
auto-filtered by our bot because it used more than the allowed number of
emotes.
Non-English or symbols in chat. This is where content was
removed because there were non-English characters in the message, or because
disallowed symbols were used.
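The two automated checks described above - an emote-count limit and a character-set rule - can be sketched as simple bot-side functions. This is an illustrative sketch only: the emote list, the limit of 5, and the allowed character set below are hypothetical assumptions, not the community's actual configuration.

```python
import re

# Hypothetical configuration; the real bot's emote list and
# thresholds are not published in this report.
KNOWN_EMOTES = {"Kappa", "PogChamp", "dexbonHi", "dexbonLove"}
MAX_EMOTES = 5

def count_emotes(message: str) -> int:
    """Count how many whitespace-separated words are known emote codes."""
    return sum(1 for word in message.split() if word in KNOWN_EMOTES)

def violates_emote_limit(message: str) -> bool:
    """True if the message uses more than the allowed number of emotes."""
    return count_emotes(message) > MAX_EMOTES

def contains_disallowed_characters(message: str) -> bool:
    """True if the message contains characters outside basic English
    letters, digits, whitespace and common punctuation."""
    return re.search(r"[^A-Za-z0-9\s.,!?'\":;()-]", message) is not None
```

In practice a bot would run checks like these on each incoming chat message and delete (or time out) on a match; the nuance-heavy rules, such as spoilers, are left to human moderators precisely because they cannot be expressed as a simple pattern like this.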
Information
The situations where punishment reasons were not routinely
given are broadly categorised as: timeouts in lieu of a deleted
message, and actions taken during a period of
urgency.
During a hate raid which occurred in 2021, it was necessary
for one of our senior moderators to time out many people without providing
reasons. All of the accounts which were impacted were later banned both by us and
by Twitch.
We will continue not to provide reasons in
the situations listed above; however, we are broadly moving away from using
1-second timeouts to remove a message and instead using deletions for first
warnings. Moderators have been instructed of these changes and are, with the
occasional exception, following them. Timeouts are still being used
for those who continue to break our rules following a deleted message.
Distinct Ban Reasons
The count of each reason, including parts of a
multi-reason ban.
This analysis looks primarily at ban reasons. These bans are considered permanent, and we
have not excluded bans where a successful appeal has since taken place. The only excluded
bans are those which were issued as part of testing or by mistake; for example,
Alexratman banning his alt account "notalexratman".
The largest group of banned accounts were the Spambots. These were the accounts
related to the Twitch hate raid previously mentioned. We have separated
this group from Rule 1 and Rule 2 to show that it is an exceptional case. Likewise, we
have split out accounts which were suspended by Twitch since the ban - these are usually
sitewide bans for breaking Twitch's Community Guidelines. However, as we were not sure of
the exact reason, we have chosen not to guess.
It is not surprising to the moderators that Rule 1 (Abuse) and Rule 2 (Spam) are the most
common reasons for a community member to be banned. These rules cover some of the most
extreme negative behaviours, so it is to be expected that they account for the most
bans.
More interesting was the fact that Rule 11 was the next most common ban reason.
Rule 11 relates to topics which have been considered disruptive or inappropriate, so we
ask community members not to speak about them; repeatedly continuing to discuss these
topics will get a user banned. The senior team looked into whether that rule is correctly
applied, and found that it was usually supplementary to another ban reason, normally
Rule 1 or Rule 2.
Despite the perception from some members of the community that many people are banned
for Backseat Gaming or Spoilers (Rules 7 and 8), these were among the least used reasons,
with fewer than 3% of the bans between them. Instead, we believe that in the majority
of cases a user will change their behaviour following a timeout or deleted message and not
repeatedly break Rule 7 or Rule 8. This suggests that there may be further education
necessary for new users in chat.
Join The Discord
Join over 6,000 other purritos in our Discord! Get access to tonnes of emotes, sub-only
channels, custom bot commands and games, pet selfies and more!