Above all else, the foundation of a good moderation team is familiarity. By getting to know your fellow moderators better, such as what they do or where they live, you’ll relate to them more easily. You’ll learn people’s strengths and weaknesses, understand them better, and develop a feel for when they need help, and vice versa. Though you may all be in different time zones and have diverse backgrounds, you’re all working toward the same goal: keeping the community you all care for safe. Who knows, you might find that you share many of the same interests along the way, make great new friends, or deepen existing friendships during your time together as moderators.
Here are a few basic things you should do to familiarize yourselves with each other:
A moderation team needs a clear structure and a unified understanding of server moderation, which has already been covered in a previous segment: 302 - Developing moderator guidelines. Now we’re going to expand on how to utilize each moderator's abilities further. A moderation team can range from a few members in a small server to a team of 30 or more staff, depending on server size. The bigger your community gets, the more organized the team needs to be. While they are all moderators, that doesn’t mean they all do the same job.
Some of your moderators, especially experienced moderators, are likely to be in more administrative positions. They usually stay further away from general day-to-day channel moderation while newer moderators are focused on watching conversations and enforcing the server rules.
If you do have one of these larger mod teams, consider delegating certain moderators to the tasks and responsibilities they’d be best suited for, rather than having a jack of all trades, master of none situation. This allows you to divide the team into smaller sub-teams that talk to each other more frequently in designated channels about their specific mod duties.
Here are a few examples of sub-teams that are common within larger communities:
Moderators that primarily contribute to the community by enforcing rules, watching conversations, engaging members, solving member-to-member conflicts, and showing moderation presence. The same type of moderator could also exist for Voice Channels, but that is mostly for very large communities.
Moderators that are extremely familiar with permissions, bots, programming, etc. Ideally, they aren’t just able to operate bots you’re using, but also maintain them. A custom bot tailored to your community is always something to consider. Having a bot hosted “in-house” by a moderator within your team adds an additional layer of security. The Bot Team is very valuable in making new and creative ideas possible and especially in automating day-to-day processes.
Most servers host events, from community-run events to events run by staff members. Event Supervisors oversee the community members hosting events, keep an eye out for new events, and serve as the general point of communication for hosting and announcing them.
Giving moderators a designated job appropriate to your team’s size, along with the ability to dive more deeply into certain topics of moderation within your community, utilizes them better and makes managing and coordinating the team as a whole easier.
As server size and the number of moderators increase, it becomes harder to hear every voice and opinion. As a team, decisions need to be made together; they need to be consistent, equitable, and take into account as many different opinions as possible.
It’s important to establish a system, especially for making big decisions. Often, there are decisions that need to be made in the moment, for example, when someone posts offensive content. In most cases, a moderator will act on their perception of the rules and punish offenders accordingly. At that very moment, the offending content has to be removed, leaving little to no time to gather a few staff members and make a decision together. This is where consistency comes into play. The more your moderators share equal knowledge and the same mindset, the more consistent moderation actions become. This is why it’s important to have moderator guidelines and a clear structure.
It’s very important to give every moderator freedom so they don’t have to ask every time before they can take action, but it’s also important to hear out as many opinions on any major server changes as possible, if time allows it.
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
Over time, a moderation team grows. It grows in many ways: in its abilities, in the number of moderators, but also together as a team. Every new moderation team will face challenges it needs to overcome together, and every established team will face new situations it has to adapt to and deal with. It’s important to give a moderator something to work towards. Mods should look forward to opportunities that will strengthen their capabilities as a moderator and strengthen the team’s performance as a whole.
You should let moderators know that they have potential for growth in their future as a moderator. It can be something like specializing in specific topics of moderation, such as introducing them to managing or setting up bots. Perhaps over time they will progress to senior moderator and focus more on the administrative side of things.
The Discord Moderation Academy can be a valuable resource in encouraging moderator growth as well. While moderators may be familiar with some of the concepts covered in it, no moderator can know everything, and these articles have the potential to further refine their moderation knowledge and enhance their abilities.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
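As an illustration, here is a minimal Python sketch (not tied to any particular bot library) that represents an embed as the JSON-style structure described above and checks it against the character limits documented by the Discord API. The helper names are made up for this example.

```python
import json

# Character/count limits documented by the Discord API for embeds
LIMITS = {
    "title": 256,
    "description": 4096,
    "fields": 25,
    "field.name": 256,
    "field.value": 1024,
    "footer.text": 2048,
    "total": 6000,
}

def embed_length(embed):
    """Sum of all text that counts toward the 6000-character total."""
    total = len(embed.get("title", "")) + len(embed.get("description", ""))
    total += len(embed.get("footer", {}).get("text", ""))
    for field in embed.get("fields", []):
        total += len(field.get("name", "")) + len(field.get("value", ""))
    return total

def validate_embed(embed):
    """Return a list of limit violations; an empty list means the embed is valid."""
    problems = []
    if len(embed.get("title", "")) > LIMITS["title"]:
        problems.append("title too long")
    if len(embed.get("description", "")) > LIMITS["description"]:
        problems.append("description too long")
    if len(embed.get("fields", [])) > LIMITS["fields"]:
        problems.append("too many fields")
    for field in embed.get("fields", []):
        if len(field.get("name", "")) > LIMITS["field.name"]:
            problems.append("field name too long")
        if len(field.get("value", "")) > LIMITS["field.value"]:
            problems.append("field value too long")
    if embed_length(embed) > LIMITS["total"]:
        problems.append("embed exceeds 6000 characters in total")
    return problems

embed = {
    "title": "Server Rules",
    "description": "Please read the rules below. **Markdown** works here.",
    "fields": [{"name": "Rule 1", "value": "Be respectful.", "inline": False}],
    "footer": {"text": "Last updated by the mod team"},
}
print(json.dumps(embed, indent=2))
print(validate_embed(embed))  # [] — within all limits
```

Running a check like this before sending an embed avoids the API rejecting your message outright.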
Moderators are direct representatives of your community and as such should be a reflection of what an ideal community member looks like. Many things tie into showing this professionalism, ranging from how moderators chat with members in public to consistency in moderator actions.
The presence of a moderator should never make people uncomfortable; there needs to be a balance between “I can chat with this moderator like with any other member” and “This is a moderator and I need to take what they're saying seriously”.
Here are a few attributes that make a moderation team look and act professional:
Your team should share positivity, engage in conversation, show respect for others, and maintain a professional appearance. Make it clear to your moderation team that they are a highly visible, and perhaps intensely scrutinized, group within your community, and that they should conduct themselves appropriately with those aspects in mind.
As with all group efforts, friction can occur within a moderation team. This is undoubtedly something you’re all going to have to address at some point in your mod team’s lifespan, and it’s very important to deal with these situations as soon as they come up. Many people avoid dealing with conflict directly because it can be uncomfortable. However, getting to potential problems before they become serious can prevent more severe issues from cropping up in the future. The goal of a problem-solving process is to make a moderation team better equipped to handle conflict.
As a general problem solving process, you should:
With that in mind, there are also situations where you’d want to exercise more discretion. Something that might prompt this is when a moderator makes a mistake.
It can be both uncomfortable and embarrassing to be “called out” for something, so often enough the best option is to speak to someone privately if you think they’ve made a mistake. After the moderator understands the situation and acknowledges the mistake, the problem can be talked about with the rest of the team, mainly to prevent situations like these in the future.
Still, sometimes there are situations where problems can’t be solved, things get more and more heated, and in the end parting ways is unavoidable. Moderators leave on their own or have to be removed. Your team members should always be given the option to take a break from moderation, especially to avoid burnout. You can learn more about how to deal with moderator burnout in course 311.
Even though this comparison is important for better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work their best when working together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
*Unconfigurable filters, these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via google, anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-Spam is integral to running a large private server, or a public server. Spam, by definition, consists of irrelevant or unsolicited messages. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in. The common forms are listed in the table above. The most common forms of spam are also very typical of raids, those being Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following are spammed repeatedly in one message or across several messages: Mentions, Links, Invites, Emoji, or Newline Text, they will trigger most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to contain, as you may wish to punish more or less harshly depending on the type of spam. Namely, Emoji and Links may warrant separate punishments: spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
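The “X in Y” logic described above can be sketched with a simple sliding-window counter. This is an illustrative Python example, not the implementation any particular bot uses; the class name and thresholds are made up for the sketch.

```python
import time
from collections import defaultdict, deque

class XInYFilter:
    """Trigger when a user performs more than `x` actions within `y` seconds."""

    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.events = defaultdict(deque)  # user_id -> timestamps of recent actions

    def hit(self, user_id, now=None):
        """Record one action (e.g. one link sent); return True if the limit is exceeded."""
        now = time.monotonic() if now is None else now
        window = self.events[user_id]
        window.append(now)
        # Discard actions that have fallen out of the y-second window
        while window and now - window[0] > self.y:
            window.popleft()
        return len(window) > self.x

# Example: punish anyone sending more than 10 links in 5 seconds,
# whether that is 10 links in one message or 1 link in 10 messages.
links = XInYFilter(x=10, y=5)
triggered = False
for i in range(11):
    triggered = links.hit(user_id=123, now=i * 0.1)  # 11 links in about one second
print(triggered)  # True
```

The same structure works as a Fast Messages filter (count messages instead of links), which is why one mechanism can serve both roles.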
Sometimes, spam may happen too quickly for a bot to keep up. There are rate limits in place to stop bots from harming servers, which can prevent deletion of individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending further messages; this can be done via a mute, kick, or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via google, anything flagged as unsafe can be removed
***Setting a catch-all filter with carl will prevent link-specific spam detection
A text filter is integral to a well-moderated server. It’s strongly recommended you use a bot that can filter text based on a blacklist. A banned words filter can catch links and invites, provided http:// and https:// are added to the word blacklist (to block all links) or specific full site URLs are added to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A banned words filter is integral to running a public server, especially a Partnered, Community, or Verified server, as this level of auto moderation is highly recommended for adhering to the additional guidelines attached to those programs. Before configuring a filter, it’s a good idea to work out what is and isn’t okay to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers. Banned words filters with an explicit blacklist often won’t account for context, so it’s important that a robust filter also contains whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word.
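To illustrate why whitelisting matters, here is a minimal Python sketch of a substring-based banned words filter with a whitelist. The blacklist and whitelist entries are example values only, not a recommended configuration.

```python
import re

# Example entries only: substrings that trigger the filter
BLACKLIST = {"nig", "discord.gg", "http://", "https://"}
# Exact words that contain a banned substring but are acceptable
WHITELIST = {"nigeria", "nigerian"}

def flagged_words(message):
    """Return the words in a message that contain a blacklisted substring
    and are not explicitly whitelisted."""
    flagged = []
    for word in re.findall(r"\S+", message.lower()):
        if word in WHITELIST:
            continue  # whitelisting overrides the substring match
        if any(banned in word for banned in BLACKLIST):
            flagged.append(word)
    return flagged

print(flagged_words("Join via https://discord.gg/example"))  # flags the invite link
print(flagged_words("I visited Nigeria last year"))          # [] — whitelisted
```

A real filter would also normalize evasion attempts (spacing, substituted characters), but the whitelist-before-blacklist ordering shown here is the core idea.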
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of the moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real-world issues may require discussion of slurs or other demeaning language; in this case, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also good to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without intricate setup. However, it is recommended you configure a custom filter to ensure specific slurs, words, etc. that break your server’s rules aren’t being said.
Invite filtering is equally important in large or public servers where users will attempt to raid, scam, or otherwise assault your server with links intended to manipulate your user base into joining other servers, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server white/blacklisting, letting you control which servers are okay to share invites to and which aren’t. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to you in order to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
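A raid-user detection heuristic like the one described can be sketched in a few lines of Python. This is a hypothetical scoring example; the thresholds and weights are made-up values you would tune for your own community, and real systems use more signals than these two.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical threshold: accounts younger than this look suspicious
MIN_ACCOUNT_AGE = timedelta(days=7)

def raid_score(account_created, has_avatar, now=None):
    """Score how raid-like a joining user looks; higher means more suspicious."""
    now = now or datetime.now(timezone.utc)
    score = 0
    if now - account_created < MIN_ACCOUNT_AGE:
        score += 2  # freshly created accounts are a common raid signal
    if not has_avatar:
        score += 1  # no profile picture is a weaker secondary signal
    return score

def is_suspected_raider(account_created, has_avatar, threshold=2, now=None):
    """Flag the user if their combined score meets the (made-up) threshold."""
    return raid_score(account_created, has_avatar, now=now) >= threshold

now = datetime.now(timezone.utc)
print(is_suspected_raider(now - timedelta(hours=1), False, now=now))   # True
print(is_suspected_raider(now - timedelta(days=400), True, now=now))   # False
```

Scoring rather than a single hard rule lets you act on combinations of weak signals without flagging every new legitimate member.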
Raid prevention stops a raid from happening, triggered by either raid detection or raid-user detection. These countermeasures stop participants of a raid from harming your server by preventing raiding users from accessing it in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’.

It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and leave times is extremely small (such as 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public server.
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed in order to hide their actual username.
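As a small illustration, a regex-based username filter (similar in spirit to the regex filter YAGPDB offers) might look like this in Python. The patterns here are hypothetical examples, not a recommended blocklist.

```python
import re

# Hypothetical patterns — extend with whatever your server needs to block
USERNAME_PATTERNS = [
    re.compile(r"discord\.gg/\S+", re.IGNORECASE),  # invite links in names
    re.compile(r"free\s*nitro", re.IGNORECASE),     # common scam phrasing
]

def username_violation(name):
    """Return the first pattern a username matches, or None if it is clean."""
    for pattern in USERNAME_PATTERNS:
        if pattern.search(name):
            return pattern.pattern
    return None

print(username_violation("FreeNitro_Giveaway"))  # matches the scam pattern
print(username_violation("friendly_mod"))        # None
```

On a match, a bot would typically apply a neutral nickname or remove the user, depending on severity.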
Taking a moment to look back at the history and progression of your community and your mod team can be a useful way to evaluate where your team is and where it needs to be. The time frame can be of your choosing, but common intervals are monthly, quarterly, or every half year.
You don’t always get to talk with your moderators every day; most moderators are volunteers doing this as a hobby, with lives of their own, scattered all over the world. With all of that going on, it can be hard to find the time or space to discuss things you might feel are lacking or could be changed about the server, and that’s why reviewing is important.
A Community Review can be done in many ways. For most, a channel asking a few basic questions, such as how the community has developed since the last review and how activity and growth have changed, is a good way to start. Most importantly, you want to hear how the moderation team is doing. Talk about mistakes that have happened recently and how they can be prevented, and review some of the moderator actions that have been taken. A review allows everyone to share their thoughts, see what everyone is up to, and deal with longer-term issues. It also lets you give your moderators feedback on their performance.
One additional component not included in the table is the effects of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join in the conversation of your server, but in exchange help protect your community from trolls, spam bots, those unable to read your server’s language, or other low intent users. This can make administration and moderation of your server much easier. You’ll also see that the percent of people that visit more than 3 channels increases as they explore the server and follow verification instructions, and that percent talked may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.
The essence of managing a moderation team is to be open-minded, to communicate, and to listen to each other. Endeavor to manage decisions and confront problems in an understanding, calm, and professional way. Give moderators the opportunity to grow and designated tasks, but also time to review, take breaks, and rest. Remember, you’re in this together!
Take the Discord Moderator Exam!