There are a few “genres” of situations you tend to see on a daily basis, depending on your community. Keep in mind that every situation is unique, so it can be difficult to know exactly what you should do in each scenario. If you find yourself in one of those situations, here are some guidelines that apply to nearly all of them:
In some scenarios, steps 2 and 3 can be interchangeable or simultaneous. Sometimes the context and motives are immediately apparent with the action, such as a user’s intent to cause disruption by spamming gore in your server. You can see right away that no additional context is needed and that their motives are demonstrated clearly, so you can go right to proportional response. In this case, the user is typically banned and reported to Discord’s Trust & Safety team.
There are two questions you should ask yourself whenever something catches your attention:
These questions are rather straightforward, but sometimes the answer may be a little unclear. Typically a member’s disruption in the chat will catch your eye. This disruption may be a variety of different things: they might be explicitly breaking your server’s defined rules, treating other members harshly, bringing the quality of your chat down through their behavior, or perhaps just a small yet visible disagreement. If you confirm that something like this is happening, you can then ask yourself the next question: Do I need to intervene?
When a member begins to disrupt your server, this member may need intervention from a moderator to prevent the situation from escalating. However, while it may be your first instinct to step in as a moderator when something happens, take a step back and evaluate if that’s necessary. If two members have a disagreement on a subject, this doesn’t always mean that the situation will become heated and require your intervention. Disagreements are common not only on Discord but in any sort of open forum platform where everyone can voice their opinion on whatever anyone else says. Disagreements are a natural part of conversation and can encourage healthy discourse. As long as a disagreement does not turn into a heated argument, disagreements tend to be mostly benign.
There are, however, also cases that will require a moderator’s intervention. If a situation seems to be escalating into harassment rather than simple disagreement, or if members are posting things that break your server’s rules, you can determine that it’s appropriate for you to intervene.
After you’ve confirmed to yourself that something needs your attention, you should begin the next step of gathering information.
Before we get into that, though, it’s good to note that there are certain scenarios in which you would skip this step entirely and immediately move on to the third step: de-escalating or handing down a corrective action. These are situations in which you can tell right away that additional context is unnecessary and that something needs to be done, typically immediately. Examples include:
In cases like these, additional deliberation is unnecessary as the violations are obvious. For more ambiguous cases however, you should consider the context of the situation and the motives of the user.
Context is the surrounding circumstances of each situation. This includes the events that happened before the incident, the interaction history of those involved, the infraction history of those involved, and even how long they’ve been in your server.
Consider the scenario where a user uses a racial slur. Some may think that the user should immediately have corrective action taken against them, but that may not be the case. This user could have been explaining an issue they run into in the real world, or they could be asking someone else not to use the word. With additional information at hand, it may become evident that the transgression is less severe than initially thought, or perhaps even a non-violation at all. The exact action taken will depend on your rules, but it’s clear that understanding all of the relevant information is key to ensuring you take appropriate and proportional action.
Another thing to consider when you first approach a scenario is the underlying motives of those involved. What are they trying to achieve? What is their goal by doing what they’re doing?
For example, if two users are trading mild insults, it is possible to interpret this as friendly banter if you know these two people are good friends. Conversely, if you know these people dislike each other, then their motives may be less than friendly. Knowing your members well will therefore help you better assess when a situation that needs intervention is occurring.
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
Now that you’ve confirmed both the context of the situation and the underlying motives of the individual(s), you can decide what action you should take. Unless you deem the conduct of a user to be notably severe, a typical initial response is to de-escalate or defuse the situation. This means you attempt to solve the situation by verbal communication rather than moderation action, such as an official warning, a mute, or a ban.
When it comes to de-escalation, you should remember that the members involved are typically going to be annoyed or upset at that moment due to the situation at hand. If you approach the situation from a stern and strict stance immediately, you could upset the members further and fan the flames, so to speak.
An example of verbally mitigating an argument that's turning too heated would be to say “Hey folks! While we appreciate discussion and think disagreement is healthy for promoting productive discourse, we think this particular discussion may have gone a little too far. Could we please change the subject and talk about something else? Thanks!”
Now, consider what this statement aims to accomplish. It starts positive and friendly, thanking the users for their participation on the server. Showing this appreciation can help to calm the members involved. The message then states the reason for the intervention. Doing this respectfully is important, because if you aren’t respectful to your members, they aren’t going to be respectful to you. This effect is amplified on community servers where you are going to be interacting with the same active members on a regular basis.
After clarifying the reason for intervention, you should make the request on what you expect to happen going forward. In this situation, this is asking the members to move on. It’s important to note that phrasing the request as a question rather than an order is a deliberate choice. The message thanks them one more time as a way to end it on a positive note. Your goal here is to defuse the situation so things don’t get worse. Keeping all of these things in mind when you phrase your communications is important.
De-escalation is a skill that you may struggle with initially. Becoming comfortable with it requires experience with many different moderation scenarios. Don’t be discouraged if you can’t do it immediately. You’re going to run into scenarios where you simply aren’t able to effectively defuse the situation and may have to rely on a corrective action instead. It is still a very good idea to generally approach these situations without the intent of punishing someone. Not every situation needs to end with a punishment. The one skill that can take you from a good mod to an outstanding mod is the ability to defuse situations swiftly and efficiently.
If you’ve tried to defuse a situation and the members involved fail to listen or continue to escalate, your next step is deciding what other effective means you have to end the situation at hand. So, what exactly should you do?
Most servers tend to follow a proportional response system. This means that members tend to receive corrective action proportional to the acts they commit. If we think about our situation where an argument got too heated and de-escalation techniques were ineffective, we may want to consider restricting the privileges of the members involved. This serves as a punishment that is appropriate for the scenario while also allowing them the time they need to cool down and move on. Other examples of where a mute may be appropriate include minor spam, a user who is clearly inebriated, a user who is being a little too harsh, or someone who needs time to cool off. It’s important to note that an official warning, typically given through a moderation bot, could also be used as an alternative.
After you apply this mute, it is worth looking at the history of the members involved in the incident to determine if the mute is all you need. If these members have a history of being problematic in chat, you may consider removing them from your community.
It’s important to remember that the goal of the moderation team is to promote healthy activity in our communities. With this in mind, it’s also good to remember that moderators and members are ultimately a part of that same community and that you don’t want to intimidate the people that rely on you. If you react too harshly, you run the risk of establishing a negative relationship between you and your community. People in your community should feel safe approaching you about an issue. Just like in the real world, they want to be confident that if it ever comes to them being reported, they’ll be treated fairly. A member who is scared of being banned from the server because of a small disagreement will tend not to want to engage with the server in the first place.
Inversely, if you don’t react strongly enough, you allow those who wish to disrupt your community more time and opportunity to do so and you may not be trusted by your community to handle situations.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
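To make the JSON format concrete, here is a minimal sketch of an embed payload as a Python dictionary, along with a small validator for some of the commonly documented API limits. The field contents are placeholders, and the `LIMITS` table covers only a few of the limits; check Discord’s documentation for the full list.

```python
# A minimal embed payload, structured like Discord's embed object.
# All values here are placeholders for illustration.
embed = {
    "title": "Server Rules",
    "description": "Please read the rules below.",
    "color": 0x5865F2,  # embed accent color as an integer
    "fields": [
        {"name": "Rule 1", "value": "Be respectful.", "inline": False},
    ],
    "footer": {"text": "Last updated by the mod team"},
}

# A few commonly documented embed limits (not exhaustive).
LIMITS = {"title": 256, "description": 4096, "fields": 25,
          "field_name": 256, "field_value": 1024, "footer": 2048}

def validate_embed(e):
    """Return a list of limit violations; empty if the embed looks OK."""
    problems = []
    if len(e.get("title", "")) > LIMITS["title"]:
        problems.append("title too long")
    if len(e.get("description", "")) > LIMITS["description"]:
        problems.append("description too long")
    fields = e.get("fields", [])
    if len(fields) > LIMITS["fields"]:
        problems.append("too many fields")
    for f in fields:
        if len(f.get("name", "")) > LIMITS["field_name"]:
            problems.append("field name too long")
        if len(f.get("value", "")) > LIMITS["field_value"]:
            problems.append("field value too long")
    if len(e.get("footer", {}).get("text", "")) > LIMITS["footer"]:
        problems.append("footer too long")
    return problems
```

Running an embed through a check like this before sending it helps you catch limit errors locally instead of receiving an API error from the bot or webhook.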
After you’ve dealt with a scenario, it may be appropriate to take action in other places as well. Questions may arise from other members, your staff may need to know about this incident in the future, or tensions may remain high where the incident occurred.
It is important to log this incident with the other members of your staff for future reference. There are many ways to do this, whether that be sending a message in your private staff channel, logging it within a bot, or maybe posting about it in your moderation log. These all provide you with a means to go back and check the history of these users and their run-ins with staff. It is important that you’re diligent about keeping these records. Other staff might not know about the incident and similarly you may not be aware of other incidents handled by your fellow staff members. If you find yourself in a situation where the problem user causes issues in the future, you will be able to quickly access the infraction history. This will allow you to appropriately adjust your response to the situation and emphasizes the importance of context when taking action.
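If you log incidents yourself rather than through a bot, even a very simple append-only record works. The sketch below writes one JSON line per infraction to a file; the exact fields and file format are assumptions for illustration, not a standard.

```python
import json
import time

def log_infraction(path, user_id, action, reason, moderator):
    """Append one infraction record as a JSON line to the log file.

    The field names here are illustrative; adapt them to whatever your
    staff needs to look up later (timestamps, user, action, reason).
    """
    entry = {
        "ts": time.time(),      # unix timestamp of the action
        "user_id": user_id,     # who the action was taken against
        "action": action,       # e.g. "warn", "mute", "ban"
        "reason": reason,
        "moderator": moderator, # who handled the incident
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Because each line is standalone JSON, it’s easy to grep a user’s history later or import the file into a more capable tool.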
Tensions may linger where the incident occurred. Other members may see what happened and feel second-hand discomfort or anger depending on the situation. It may be necessary to resolve this tension by thanking the other members of chat for their patience and/or bringing it to your attention and stating that it was solved. This has the side effect of answering where the users went and why it happened.
For example, if two users had a heated argument in your chat and you ended up muting them, third-party observers may see this argument in chat and react negatively to the comments made during the argument. You can resolve this by stating something along the lines of “Sorry about that everyone. Situation resolved, users will be muted for a time to cool down.” This statement has the effect of stating what you did and why you did it. Acknowledging the situation as well as detailing that it’s been handled is an effective means to ease tensions and bring healthy discussion back to your chat. Keep in mind though, if the conversation has already moved on by the time you’ve dealt with the incident, this step may not be necessary. Bringing the conversation back to this issue may have the opposite effect and remind people of the uncomfortable situation.
You should now be able to confidently approach each situation and determine what the best way to handle it is. That being said, this is just a portion of your foundation. First hand experience is invaluable and necessary in order to be more efficient and fluent in moderating.
One of the most undervalued tools in moderation is your voice as a person in a position of power and your ability to defuse a situation, so don’t be afraid of trying to mitigate a situation first. If you’re still in doubt about what to do, never be afraid to ask your other staff members, especially those who may be more experienced.
Remember: Situation identification, information gathering, initial response, and situation closure. Keeping these steps in mind will help you stay on track to becoming a better mod and better community lead.
Even though this comparison is important for better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work their best when working together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
*Unconfigurable filters, these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via google, anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messages. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in. The common forms are listed in the table above. The most common forms of spam are also typical of raids, namely Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If Mentions, Links, Invites, Emoji, or Newline Text are spammed repeatedly in one message or across several messages, they will appropriately trigger most Repeated Text and Fast Messages filters. Subset filters are still a good thing for your anti-spam filter to contain, as you may wish to punish more or less harshly depending on the type of spam; Emoji and Links in particular may warrant separate punishments. Spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
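The “X in Y” logic described above can be sketched as a sliding-window counter. The thresholds below (10 events in 5 seconds) and the notion of an “event” (a link, a message, a mention) are illustrative assumptions, not taken from any particular bot.

```python
import time
from collections import defaultdict, deque

MAX_EVENTS = 10   # X: how many events are allowed...
WINDOW = 5.0      # ...in Y seconds (both values are illustrative)

# Per-user timestamps of recent events.
history = defaultdict(deque)

def record_event(user_id, now=None):
    """Record one event (a link, a message, etc.) for this user.

    Returns True once the user exceeds the X-in-Y threshold, whether the
    events arrived in one message or spread across many.
    """
    now = time.monotonic() if now is None else now
    q = history[user_id]
    q.append(now)
    # Discard events that have fallen outside the window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) > MAX_EVENTS
```

Because the counter only cares about events per window, calling `record_event` ten times for one message full of links or once per message across ten messages triggers the same threshold, which is how one filter can act as both a Fast Messages and a Repeated Text check.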
Sometimes, spam may happen too quickly for a bot to catch up. There are rate limits in place to stop bots from harming servers that can prevent deletion of individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending messages; this can be done via a mute, kick or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via google, anything flagged as unsafe can be removed
***Setting a catch-all filter with carl will prevent link-specific spam detection
A text filter is integral to a well-moderated server. It’s strongly recommended you use a bot that can filter text based on a blacklist. A banned-words filter can catch links and invites provided http:// and https:// are added to the word blacklist (to block all links), or specific full site URLs to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A banned-words filter is integral to running a public server, especially if it’s a Partnered, Community, or Verified server, as this level of auto-moderation is highly recommended for the server to adhere to the additional guidelines attached to it. Before configuring a filter, it’s a good idea to work out what is and isn’t okay to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, regardless of context. With an explicit blacklist, banned-word filters often won’t account for context, so it’s important that a robust filter also contains whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word.
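The blacklist-plus-whitelist idea can be sketched in a few lines. The word lists below are harmless placeholders standing in for your real lists: “heck” plays the role of a banned term, and “heckle” the role of an innocent word that happens to contain it.

```python
import re

# Placeholder lists: substitute the words appropriate to your server.
BLACKLIST = {"heck"}      # banned substrings
WHITELIST = {"heckle"}    # innocent words that contain a banned substring

def message_violates(content):
    """Return True if any token matches the blacklist without being
    covered by the whitelist."""
    for token in re.findall(r"[a-z]+", content.lower()):
        if token in WHITELIST:
            continue  # explicitly allowed despite containing a banned substring
        if any(banned in token for banned in BLACKLIST):
            return True
    return False
```

Note that the whitelist must cover every safe form (“heckler” would still be flagged here), which is why robust filters expose configurable whitelists rather than relying on the blacklist alone.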
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real-world issues may require discussions about slurs or other demeaning language; in exceptions like this, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also a good idea to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without intricate setup. However, it is recommended you also configure a custom filter to ensure specific slurs, words, etc. that break the rules of your server aren’t being said.
Invite filtering is equally important in large or public servers, where users will attempt to raid, scam, or otherwise assault your server with links intended to manipulate your user base into joining elsewhere, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server whitelisting/blacklisting, letting you control which servers are okay to share invites to and which aren’t. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to you in order for you to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
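Those heuristics can be sketched as a simple scoring function. The signals (account age, missing avatar), thresholds, and weights below are invented for illustration; real systems combine many more signals.

```python
from datetime import datetime, timedelta, timezone

def raid_suspicion_score(created_at, has_avatar, now=None):
    """Score a joining account on simple raid-user heuristics.

    created_at: the account's creation time (timezone-aware datetime)
    has_avatar: whether the account has a custom profile picture
    Returns an integer score; higher means more suspicious.
    """
    now = now or datetime.now(timezone.utc)
    score = 0
    age = now - created_at
    if age < timedelta(days=1):
        score += 2   # account created within the last day
    elif age < timedelta(days=7):
        score += 1   # account created within the last week
    if not has_avatar:
        score += 1   # still using the default profile picture
    return score
```

A bot might treat a score of 2 or more as suspicious and hold those accounts for manual review or extra verification, while letting established accounts through normally.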
Raid prevention stops a raid from happening, either by Raid detection or Raid-user detection. These countermeasures stop participants of a raid specifically from harming your server by preventing raiding users from accessing your server in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message-removal commands to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’. It should be noted that Discord features built-in raid and user-bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and leave times is extremely small (such as between 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public server.
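That join-to-leave inference is easy to automate from your logs. The sketch below assumes a simplified log format (user ID mapped to a unix timestamp) and a 5-second threshold, both of which are illustrative.

```python
def likely_bot_removals(joins, leaves, threshold=5.0):
    """Flag members whose leave time is within `threshold` seconds of
    their join time, suggesting Discord removed the account itself.

    joins, leaves: dicts mapping user_id -> unix timestamp.
    Returns the list of suspicious user ids.
    """
    return [uid for uid, t_join in joins.items()
            if uid in leaves and 0 <= leaves[uid] - t_join <= threshold]
```

Accounts flagged this way usually need no action from you; the value is in noticing how many of them there are, which hints at attempted raids your members never saw.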
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto-moderation. When choosing which bot(s) to use for your auto-moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed in order to hide their actual username.
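A username check combining the two approaches mentioned in the table footnotes (a word list and a regex pattern) might be sketched like this. Both the word list and the pattern are placeholders.

```python
import re

# Placeholder word list of banned substrings for names.
NAME_BLACKLIST = {"badword"}

# Placeholder regex: here, catching invite links embedded in names.
NAME_PATTERN = re.compile(r"discord\.gg/\S+", re.IGNORECASE)

def username_is_abusive(name):
    """Return True if the username matches the word list or the regex."""
    lowered = name.lower()
    if any(word in lowered for word in NAME_BLACKLIST):
        return True
    return bool(NAME_PATTERN.search(name))
```

Since nicknames can mask an abusive username, a bot using a check like this would typically run it on join and on any username or nickname change, rather than only once.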
One additional component not included in the table is the effect of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join in the conversation of your server, but in exchange help protect your community from trolls, spam bots, those unable to read your server’s language, or other low-intent users. This can make administration and moderation of your server much easier. You’ll also see that the percentage of people who visit more than 3 channels increases as they explore the server and follow verification instructions, and the percentage who talk may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.
Take the Discord Moderator Exam!