So, you’ve set up your community and established some rules to serve as a starting point. The next step to getting your server’s safety practices into place is enforcing those rules. Automated moderation can play a large role in the process of rule enforcement and keeping your community safe even when there aren’t always eyes to do it for you.
This article will cover general and specific implementations and configurations of automoderation, both with the aid of tools Discord has readily available, as well as with tools provided by third party bots. Before reading on, be sure to familiarize yourself with the following terms in order to best make use of this article:
‘Raid’ / ‘Raider’ - A raid occurs when a large number of users join a community with the express intention of causing issues for it. A raider is an account engaging in this activity.
‘Alt’ / ‘Alt account’ - An alt is a throwaway account owned by a Discord user. In the context of raids, these alts are made en masse to engage in raiding.
‘Self-bot’ - A self-bot is an account controlled via custom code or tools, which is against Discord’s Terms of Service. In the context of raids and moderation, these accounts are automated to spam, bypass filters or engage in other disruptive activities.
Auto Moderation is integral to many communities on Discord, especially those of any notable size. There are many valid reasons for this, some of which you may find apply to your community as well. The security that auto moderation can provide can give your users a much better experience in your community, make the lives of your moderators easier and prevent malicious users from doing damage to your community or even joining your community.
If you’re a well established community, you’ll likely have a moderation team in place. You may wonder: why should I use auto moderation? I already have moderators! Auto moderation isn’t a replacement for manual moderation; rather, it serves to enrich it. Your moderation team can continue to make informed decisions within your community while auto moderation makes that process easier for them by responding to common issues at any time, more quickly than a real-life moderator can.
Different communities will warrant varying levels of auto moderation. It’s important to be able to classify your community and consider what level of auto moderation is most suitable to your community’s needs. Keep in mind that Discord does impose some additional guidelines depending on how you designate your community. Below are different kinds of communities and their recommended auto moderation systems:
If you run a Discord community with limited invites where every new member is known, auto moderation won’t be a critical function unless you have a significantly larger member count. It’s still recommended to have at least some auto moderation in place, however, namely text filters, anti-spam, or Discord’s AutoMod keyword filters.
If you run a Discord community that is Discoverable or has public invites where new members can come from just about anywhere, it’s strongly recommended to have anti-spam and text filters or Discord’s AutoMod keyword filters in place. Additionally, you should be implementing some level of member verification to facilitate the server onboarding process. If your community is large, with several thousand members, anti-raid functionality may become necessary. Remember, auto moderation is configurable to your rules, as strict or loose as they may be, so keep this principle in mind when deciding what level of automation works best for you.
Verified and Partnered communities
If your Discord community is Verified or Partnered, you will need to adhere to additional guidelines to maintain that status. Auto moderation is recommended for these communities so you can feel confident that you’re enforcing these guidelines effectively at all times; consider using anti-spam and text filters or Discord’s AutoMod keyword filters. If you have a Vanity URL or your community is Discoverable, anti-raid is a must-have in order to protect your community from malicious actors.
Some of the most powerful tools in auto moderation come with your community and are built directly into Discord. Located under the Server Settings tab, you will find the Moderation and Content Moderation settings. This page houses some of the strongest safety features that Discord has to natively offer. These settings can help secure your Discord community without the elaborate setup of a third party bot involved. The individual settings will be detailed below.
AutoMod is a new content moderation feature as of 2022, allowing those with the “Manage Server” and “Administrator” permissions to set up keyword filters that can automatically trigger moderation actions such as blocking messages containing specific keywords from being sent and logging flagged messages as alerts for you to review.
This feature has a wide variety of uses within the realm of auto moderation, allowing mods to automatically log malicious messages and protect community members from harm and exposure to undesirable words like slurs or severe profanity. AutoMod’s abilities also extend to messages within threads and text-in-voice channels, giving moderation teams peace of mind that they have AutoMod’s coverage across these message surfaces without adding more manual moderation work.
Setting up AutoMod and its keyword filters is very straightforward. First, make sure your server has the Community feature enabled. Then, navigate to your server’s settings and click the Content Moderation tab. From there, you’ll find AutoMod and can start setting up keyword filters. You can set up one “Commonly Flagged Words” filter, along with up to 3 custom keyword filters that allow you to enter a maximum of 1,000 keywords each, for a total of four keyword filters.
When inserting keywords, you should separate each word with a comma like so: Bad, words, go, here. Matches for keywords are exact and aware of whitespace. For example, the keyword “Test Filter” will be triggered by “test filter” but not “testfilter” or “test”. Do note that keywords also ignore capitalization.
To have AutoMod filter messages containing words that partially match your keywords, which is helpful for preventing users from circumventing your filters, you can modify your keywords with the asterisk (*) wildcard character. This works as follows:
Be careful with wildcards so as to not have AutoMod incorrectly flag words that are acceptable and commonly used!
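As a rough illustration, the exact and wildcard matching behavior described above can be approximated with regular expressions. This is an informal sketch of the documented behavior, not Discord’s actual implementation:

```python
import re

def keyword_to_regex(keyword: str) -> re.Pattern:
    """Approximate an AutoMod keyword as a regex (illustrative sketch only).

    - No wildcards: the keyword must match as a whole word, so "cake"
      matches "cake!" but not "pancakes".
    - A leading or trailing '*' allows extra characters on that side.
    - Matching ignores capitalization, mirroring AutoMod's behavior.
    """
    prefix_wild = keyword.startswith("*")
    suffix_wild = keyword.endswith("*")
    core = re.escape(keyword.strip("*"))
    left = "" if prefix_wild else r"\b"   # \b = word boundary
    right = "" if suffix_wild else r"\b"
    return re.compile(left + core + right, re.IGNORECASE)

def is_flagged(message: str, keyword: str) -> bool:
    """Return True if the message would trigger this keyword."""
    return keyword_to_regex(keyword).search(message) is not None
```

For example, `is_flagged("testfilter", "Test Filter")` is false because the exact keyword is whitespace-aware, while `is_flagged("pancakes", "*cake*")` is true because the wildcards allow characters on both sides.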
Keywords can be configured with the following automatic responses:
This response will prevent a message containing a keyword from being sent entirely. Users will be notified with an ephemeral message when this happens, informing them the community has blocked the message from being sent, but not which keyword triggered the block. As the message is prevented from being sent entirely, the volume of messages doesn’t matter. Discord will seamlessly block all messages matching the keyword filter from being sent. This is especially effective for countering raids with repeated specific messages.
Send an alert
This response will send a special kind of message to a logging channel that you will specify upon setup. This message will preview what the full caught message would’ve looked like, including the full content. It also shows a pair of buttons at the bottom of the message, ⛨ Actions and Report Issues. The actions button will bring up a user context menu, allowing you to use any permissions you have to kick, ban or time out the member. The message also displays the channel the message was attempted to be sent in and the keyword filter that was triggered by the message. In the future, some auto-moderation bots may be able to detect these messages and action users accordingly.
Time out user
This response will automatically apply a time out penalty to a user, preventing them from interacting in the server for the duration of the penalty. Affected users are unable to send messages, react to messages, join voice channels or video calls during their timeout period. Keep in mind that they are able to see messages being sent during this period. To remove a timeout penalty, Moderators and Admins can right-click on any offending user’s name to bring up their Profile Context Menu and select “Remove Timeout.”
AutoMod is a very powerful tool that you can configure to meet your community’s needs. For example, you may want to use three keyword filters: one to just block messages, one to just send alerts for messages, and one to do both. High harm keywords, such as slurs and other extreme language, should have AutoMod’s “block message” and “send alerts” responses enabled. This will allow your moderation team to take action against undesirable messages and the users behind them. Low harm keywords or commonly spammed phrases that aren’t against TOS or notably malicious on their own can have AutoMod’s “Block message” response enabled. This will prevent the messages being sent without spamming logs with alerts for them.
You can also use AutoMod’s keyword filters during a raid to catch spammed keywords and prevent the raid from causing lasting damage. Finally, you can consider having AutoMod send you alerts for more subjective content that requires a closer look from your moderation team, rather than having it blocked entirely. This will allow your moderation team to investigate lower harm keywords with context to ensure there’s nothing malicious going on. This is useful for keywords that can be commonly misconstrued, or sent in a non-malicious context.
AutoMod’s keyword filters come equipped with three predefined wordlists in a preset filter designed to provide communities with baseline protection against commonly flagged words. There are three predefined categories of words available: Insults and Slurs, Sexual Content, and Severe Profanity. These wordlists all share one rule, meaning they’ll all have the same response configured. This list is maintained by Discord, so it’s recommended that Partnered or Verified communities enable it to ensure conformance with the relevant codes of conduct and guidelines for membership in these programs.
Both AutoMod’s commonly flagged word filters and custom filters allow for exemptions in the form of roles and channels. Anyone with these defined roles, or sending messages within the defined channels, will not trigger responses from AutoMod. This is notably useful for allowing moderators to bypass filters, or allowing higher trust users to send more unrestricted messages. As an example, you could prevent new users from sending Discord invites with a keyword filter of: *discord.gg/*, *discord.com/invites/* and then give an exemption to moderators or users who have a certain role, allowing them to send Discord invites. This could also be used to only allow sharing Discord invites in a specific channel. There’s a lot of potential use cases for exemptions! Members with the Manage Server and Administrator permissions will always be exempt from all Content Moderation filters. Bots and webhooks are also exempt.
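The exemption logic described above boils down to a simple precedence check. The helper below is hypothetical, illustrating the order in which exemptions apply rather than Discord’s internal implementation:

```python
def automod_applies(author_roles: set, channel_id: int,
                    exempt_roles: set, exempt_channels: set,
                    is_admin: bool = False, is_bot: bool = False) -> bool:
    """Return True if AutoMod rules should run on this message.

    Illustrative sketch of the exemption rules described above:
    admins (Manage Server/Administrator), bots and webhooks are always
    exempt; then exempt roles and exempt channels are checked.
    """
    if is_admin or is_bot:
        return False
    if author_roles & exempt_roles:      # any exempt role bypasses the rule
        return False
    if channel_id in exempt_channels:    # messages here are never filtered
        return False
    return True
```

For instance, a member holding an exempt “Moderator” role could post invite links freely, while the same message from a new member would be filtered.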
None - This turns off verification for your community, meaning anyone can join and immediately interact with your community. This is typically not recommended for public communities as anyone with malicious intent can immediately join and wreak havoc.
Low - This requires people joining your community to have a verified email which can help protect your community from the laziest of malicious users while keeping everything simple for well-meaning users. This would be a good setting for a small, private community.
Medium - This requires the user to have a verified email address and for their account to be at least 5 minutes old. This further protects your community by introducing a blocker for people creating accounts solely to cause problems. This would be a good setting for a moderately sized community or small public community.
High - This includes the same protections as both medium and low verification levels but also adds a 10 minute barrier between someone joining your community and being able to interact. This can give you and anyone else responsible for keeping things clean in your community time to respond to ‘raids’, or large numbers of malicious users joining at once. For legitimate users, you can encourage them to do something with this 10 minute time period such as read the rules and familiarize themselves with informational channels to pass the time until the waiting period is over. This would be a good setting for a large public community.
Highest - This requires a joining user to have a verified phone number in addition to the above requirements. This setting can be bypassed by determined ‘raiders’, but it takes additional effort. This would be a good setting for a private community where security is paramount, or a public community with custom verification. This requirement is one many normal Discord users won’t fulfill, by choice or inability. It’s worth noting that Discord’s phone verification disallows VoIP numbers, preventing them from being abused.
Not everyone on the internet is sharing content with the best intentions in mind. Discord provides a robust system to scan images and embeds to make sure inappropriate images don’t end up in your community. The explicit media content filter has varying levels of scrutiny:
Don’t scan any media content - Nothing sent in your community will go through Discord’s automagical image filter. This would be a good setting for a small, private community where only people you trust can post images, videos etc.
Scan media content from users without a role - Self-explanatory: this works well to stop new users from filling your community with unsavory imagery. When combined with the proper verification methods, this would be a good setting for a moderately sized private or public community.
Scan media content from all members - This setting makes sure everyone, regardless of their roles, isn’t posting unsavory things in your community. In general, we recommend this setting for ALL public facing communities.
Once you’ve decided on the base level of auto moderation you want for your community, it’s time to look at the extra levels of auto moderation bots can bring to the table! The next few sections are going to detail the ways in which a bot can moderate.
If you want to keep your chats clean and clear of certain words, phrases, spam, mentions and everything else that can be misused by malicious users, you’re going to need a little help from a robotic friend or two. Examples of bots that are freely available are referenced below. If you decide to use several bots, you may need to juggle several moderation systems.
When choosing a bot for auto moderation, you should also consider their capabilities for manual moderation (things like managing mutes, warns etc.). Find a bot with an infraction/punishment system you and the rest of your moderator team find to be the most appropriate. All of the bots listed in this article have a manual moderation system.
The main and most pivotal forms of auto moderation are:
Each of these subsets of auto moderation will be detailed below along with recommended configurations depending on your community.
Bots seen in this guide:
It’s important that your auto moderation bot(s) of choice adopt the cutting edge of Discord API features, as this will allow them to provide better capabilities and integrate more powerfully with Discord. Slash commands are especially important, as you’re able to configure which commands are usable, and by whom, on a case-by-case basis. This will allow you to maintain very detailed moderation permissions for your moderation team. Bots that support more recent API features are generally also more actively developed, and thus more reliable at reacting to new threat vectors as well as adapting to new features on Discord. A severely outdated bot could react insufficiently to a high-harm situation.
Slash Command Permissions
As mentioned above, one of the more recent features is Slash Commands. Slash commands are configurable per-command, per-role, and per-channel. This allows you to designate moderation commands solely to your moderation team without relying on the bot’s own permission checks to work perfectly. This is relevant because there have been documented cases in the past of a moderation bot’s permission checking being bypassed, allowing normal users to execute moderation commands.
One of the most common forms of auto moderation is anti-spam, a type of filter that can detect and prevent various kinds of spam. Depending on what bot(s) you’re using, this comes with various levels of configurability.
*Unconfigurable filters, these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance **Giselle combines these elements into one filter
Anti-spam is integral to running a large private community, or a public community. Spam, by definition, is irrelevant or unsolicited messages. This covers a wide base of things on Discord and there are multiple types of spam a user can engage in. Some of the most common forms are listed in the table above. These types of spam messages are also very typical of raids, especially Fast Messages and Repeated Text. The nature of spam can vary greatly but the vast majority of instances involve a user or users sending lots of messages with the same content with the intent of disrupting your community.
There are subsets of this spam that many anti-spam filters will be able to catch. For example, if any of the following: Mentions, Links, Invites, Emoji and Newline Text are spammed repeatedly in one message, or spammed repeatedly across several messages, they will trigger most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to have, as you may wish to punish more or less harshly depending on the spam. Notably, Emoji and Links may warrant separate punishments. Spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, ten links in five seconds, they will be punished to some degree. This could be ten links in one message, or one link in ten messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
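The “X in Y” pattern described above can be sketched with a sliding window of timestamps per user. This is a generic illustration, not the implementation of any particular bot, and the limit and window values are arbitrary examples:

```python
from collections import defaultdict, deque
import time

class XInYFilter:
    """Flag a user who produces more than `limit` items (links, messages,
    mentions, etc.) within `window` seconds.

    Generic sketch of the 'X in Y' anti-spam pattern; real bots differ
    in thresholds, punishments and tracked item types.
    """
    def __init__(self, limit: int = 10, window: float = 5.0):
        self.limit = limit
        self.window = window
        self.events = defaultdict(deque)  # user_id -> recent timestamps

    def record(self, user_id: int, count: int = 1, now=None) -> bool:
        """Record `count` items (e.g. links in one message).

        Returns True if the user is now over the limit for the window.
        """
        now = time.monotonic() if now is None else now
        q = self.events[user_id]
        q.extend([now] * count)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit
```

With the defaults above, ten links in one message and one more link a second later would trip the filter, but the same messages spread over minutes would not; this is why one filter can serve as both a Fast Messages and a Repeated Text check.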
Sometimes, spam may happen too quickly and a bot can fall behind. There are rate limits in place to stop bots from harming communities that can prevent deletion of individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending messages; this can be done via a mute, kick or ban. If you want to protect your community from raids, please read on to the Anti-Raid section of this article.
Text filters allow you to control the types of words and/or links that people are allowed to put in your community. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**Users can bulk-input a YML config
***Only the templates may be used, custom filters cannot be made
A text filter is a must for a well moderated community. It’s strongly recommended you use a bot that can filter text based on a banlist. A Banned words filter can catch links and invites provided http:// and https:// are added to the word banlist (for all links) or specific full site URLs to block individual websites. In addition, discord.gg can be added to a banlist to block ALL Discord invites.
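As a minimal sketch of a banlist-based text filter, the check below treats `http://`, `https://` and `discord.gg` as banned terms, per the examples above. It is a plain case-insensitive substring check; real bots are considerably more sophisticated:

```python
# Example banlist entries taken from the text above.
BANLIST = {"http://", "https://", "discord.gg"}

def hits_banlist(message: str, banlist=BANLIST) -> list:
    """Return the banned terms found in the message (case-insensitive).

    Simplest possible banned-words filter: a substring check against
    each banlist entry. Sorted for deterministic output.
    """
    lowered = message.lower()
    return sorted(term for term in banlist if term.lower() in lowered)
```

A message containing any Discord invite would hit the `discord.gg` entry, and any link would hit one of the scheme prefixes, matching the behavior described above.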
A Banned Words filter is integral to running a public community, especially for Partnered, Community, or Verified servers who have additional content guidelines they must meet that a Banned Words filter can help with.
Before configuring a filter, it’s a good idea to work out what is and isn’t ok to say in your community, regardless of context. For example, racial slurs are generally unacceptable in almost all communities, regardless of context. Banned word filters often won’t account for context with an explicit banlist. For this reason, it’s also important that a robust filter contains allowlisting options. For example, if you add ‘cat’ to your filter and someone says ‘catch’, they could get in trouble for using an otherwise acceptable word.
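The ‘cat’/‘catch’ problem above can be illustrated with a tiny filter that consults an allowlist before flagging. This is a hypothetical helper, not any specific bot’s behavior:

```python
import re

def filter_hits(message: str, banned: set, allowlist: set) -> list:
    """Substring banlist with an allowlist.

    A banned term appearing inside an allowlisted word (e.g. 'cat'
    inside 'catch') is ignored, sketching why robust filters need
    allowlisting options.
    """
    hits = []
    for word in re.findall(r"[A-Za-z']+", message.lower()):
        if word in allowlist:
            continue  # explicitly permitted, skip the banlist check
        if any(term in word for term in banned):
            hits.append(word)
    return hits
```

Without the allowlist, `filter_hits("you can catch the cat", {"cat"}, set())` would flag both “catch” and “cat”; with “catch” allowlisted, only the genuine hit remains.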
Filter immunity may also be important to your community, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real world issues may require discussions about slurs or other demeaning language; in this exception, channel-based immunity is integral to allowing those conversations.
Link filtering is important to communities where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels dedicated to sharing that content. This can allow a community to remove links with an appropriate reprimand without treating that misstep with the same gravity as one would treat someone who used a slur.
Allow/ban-listing and templates for links are also a good idea to have. While many communities will use catch-all filters to make sure links stay in specific channels, some links will always be inherently unsavory. Being able to filter specific links is a good feature, with preset filters (like the google filter provided by YAGPDB) coming in very handy for protecting your user base without requiring intricate setup on your behalf. However, it is recommended you configure a custom filter as a supplement, to ensure specific slurs, words, etc. that break the rules of your community, aren’t being said.
Invite filtering is equally important in large or public communities where users will attempt to raid, scam or otherwise assault your community with links with the intention of manipulating your user base, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized instantly and dealt with more harshly. Some bots may also allow per-community allow/ban-listing, letting you control which communities are approved to share invites to and which aren’t. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked, communities are shared. These communities should be added to an invite allowlist to prevent their deletion.
Built-in suspicious link and file detection
Discord also implements a native filter on links and files, though this filter is entirely client-side and doesn’t prevent malicious links or files being sent. It does, however, warn users who attempt to click suspicious links or download suspicious files (executables, archives etc.) and prevents known malicious links from being clicked at all. While this doesn’t remove offending content, and shouldn’t be relied on as auto moderation, it does prevent some cracks in your auto moderation from harming users.
Raids, as defined earlier in this article, are mass-joins of users (often self-bots) with the intent of damaging your community. Protecting your community from these raids can come in various forms. One method involves gating your server using a method detailed elsewhere in the DMA.
*Unconfigurable, triggers raid prevention based on user joins and damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavory messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
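A raid-user heuristic like the one described above might combine account age and avatar checks. The function below is an illustrative sketch; the seven-day threshold is an arbitrary example, not a recommendation:

```python
from datetime import datetime, timedelta, timezone

def looks_like_raider(created_at: datetime, has_avatar: bool,
                      min_age: timedelta = timedelta(days=7)) -> bool:
    """Flag accounts that are both recently created and have no profile
    picture, two common raid-account signals mentioned above.

    Heuristic sketch only: legitimate new users will also match, so
    real systems weigh more signals before acting.
    """
    age = datetime.now(timezone.utc) - created_at
    return age < min_age and not has_avatar
```

A real bot would typically feed a flag like this into Raid prevention (kick, ban or mute) or require several such signals before acting, to keep false positives low.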
Raid prevention stops a raid from happening, triggered by either Raid detection or Raid-user detection. These countermeasures stop participants of a raid specifically from harming your community by preventing raiding users from accessing your community in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your community by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users in the community.
Raid anti-spam is an anti-spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’.
It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and the leave times is extremely small (such as between 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public community.
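Inferring Discord’s intervention from your join/leave logs can be as simple as comparing timestamps, following the 0-5 second figure above. This is a heuristic guess, not a guarantee:

```python
def likely_discord_action(join_ts: float, leave_ts: float,
                          threshold: float = 5.0) -> bool:
    """Guess whether Discord's anti-bot systems removed an account.

    If the member left within `threshold` seconds of joining, it's a
    reasonable inference (per the 0-5 second figure above) that Discord
    acted on a shady account rather than the user leaving voluntarily.
    """
    return 0 <= leave_ts - join_ts <= threshold
```

Timestamps here are plain Unix seconds as a logging bot might record them; a member who stays for a minute or more before leaving would not be flagged.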
Messages aren’t the only way potential evildoers can introduce unwanted content to your community. They can also manipulate their Discord username or Nickname to be abusive. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be a later priority, since users with malicious usernames can just be nicknamed in order to hide their actual username.
So far, we’ve covered general auto moderation bots with a wide toolset. However, there are some specialized bots that only cover one specific facet of auto moderation and execute it especially well. A few examples and descriptions are below:
This bot detects raids as they happen globally, banning raiders from your community. This is especially notable as it’ll ban detected raiders from raids in other communities it’s in as they join your community, making it significantly more effective than other anti-raid solutions that only pay attention to your community.
Fish is designed to counter scamming links and accounts, targeting patterns in joining users to prevent DM raids (like normal raids, but members are directly messaged instead). These DM raids are typically phishing scams, which Fish also filters, deleting known phishing sites.
Both of these bots are highly specialized link and file moderation bots, effectively filtering adult sites, scamming sites and other categories of sites as defined by your moderation team.
When choosing a bot for auto moderation, you should ensure it has an infraction/punishment system you and your mod team are comfortable with, as well as features best suited to your community. Consider testing out several bots and their compatibility with Discord’s built-in auto moderation features to find what works best for your server’s needs. You should also keep in mind that the list of bots in this article is not comprehensive - you can consider bots not listed here. The world of Discord moderation bots is vast and fascinating, and we encourage you to do your own research!
For the largest of communities, it’s recommended you employ everything Discord has to offer. You should use the High or Highest Verification level, all of Discord’s AutoMod keyword filters and a robust moderation bot like Gearbot or Gaius. You should seriously consider additional bots like Fish, Beemo and Safelink/Crosslink to aid in keeping your users safe and have detailed Content Moderation filters. At this scale, you should seriously consider premium, self hosted, or custom moderation bots to meet the unique demands of your community.
It’s recommended you use a bot with a robust and diverse toolset, while simultaneously utilizing AutoMod’s commonly flagged word filters. You should use the High Verification level to aid in preventing raids. If raiding isn’t a large concern for your community, Gearbot and Giselle are viable options. Your largest concerns in a community of this size are going to be anti-spam and text filters, meaning robust keyword filters are also highly recommended, with user filters as a good bonus. Beemo is generally recommended for any servers of this size. At this scale a self hosted, custom, or premium bot may also be a viable option, but such bots aren’t covered in this article.
It’s recommended you use Fire, Gearbot, Bulbbot, AutoModerator or Giselle. Mee6 and Dyno are also viable options; however, as they’re very large bots, they have been known to experience outages, leaving your community unprotected for long stretches of time. At this community size, you’re likely not going to be largely concerned about anti-raid, with anti-spam and text filters being your main focus. You’ll likely be able to get by just using AutoMod’s keyword filters and commonly flagged words lists provided by Discord. User filters, at this size, are largely unneeded, and your Verification Level shouldn’t need to be any higher than Medium.
If your community is small or private, the likelihood of malicious users joining to wreak havoc is rather low. As such, you can choose a bot with general moderation features you like the most and use that for auto moderation. Any of the bots listed in this article should serve this purpose. At this scale, you should be able to rely solely on AutoMod’s keyword filters. Your Verification Level is largely up to you at this scale depending on where you anticipate member growth coming from, with Medium being default recommended.
First, make sure Mee6 is in the communities you wish to configure it for. Then log into its online dashboard (https://mee6.xyz/dashboard/), navigate to the community(s), then plugins and enable the ‘Moderator’ plugin. Within the settings of this plugin are all the auto moderation options.
First, make sure Dyno is in the communities you wish to configure it for. Then log into its online dashboard (https://dyno.gg/account), navigate to the community(s), then the ‘Modules’ tab. Within this tab, navigate to ‘Automod’ and you will find all the auto moderation options.
First, make sure Giselle is in the communities you wish to configure it for. Then, look at its documentation (https://docs.gisellebot.com/) for full details on how to configure auto moderation for your community.
First, make sure Gaius is in the communities you wish to configure it for. Then, look at its documentation (https://automoderator.app/docs/setup/) for full details on how to configure auto moderation for your community.
First, make sure Fire is in the communities you wish to configure it for. Then, look at its documentation (https://getfire.bot/commands) for full details on how to configure auto moderation for your community.
First, make sure Bulbbot is in the communities you wish to configure it for. Then, look at its documentation (https://docs.bulbbot.rocks/getting-started/) for full details on how to configure auto moderation for your community.
First, make sure Gearbot is in the communities you wish to configure it for. Then, look at its documentation (https://gearbot.rocks/docs) for full details on how to configure auto moderation for your community.