313: How to Moderate Voice Channels

Voice channels are a great way for your server members to engage with each other and can be utilized in many different ways. You might have a system for users to find other users to play games with or maybe you just have some general chat channels to hang out and listen to music. Regardless of the types of voice channels you have, you're going to need to moderate these voice channels, which can reveal some interesting challenges. In this article, we'll identify these challenges and go over some things that can help you to overcome them.

Why Moderate Voice Channels?

Before we talk about how to moderate voice channels, it's important to consider why it's necessary. In some servers voice channels are rarely used, or only used for events where a moderator is present. If this is the case for your server, moderating voice channels is likely much more straightforward. This article will focus on how to moderate voice channels in servers where voice channels are used frequently and without moderators present. However, the information in this article is still useful to any server with voice channels. Although these situations may be rarer for your server, it doesn't hurt to be prepared if they do indeed occur.

What to Look Out For

Many of the moderation issues you encounter in voice channels will be the spoken equivalent of situations you would encounter in text channels. However, you can't keep records of what is said in a voice channel without recording everything with a bot, which is neither easy to do nor something your server members are likely to be comfortable with. This means that if no moderator is present in the voice channel while a user is causing trouble, you will likely hear about the situation from a user who witnessed it. We will discuss best practices for handling these user reports later in this article. There are also a few situations specific to voice channels to be aware of.

Common situations that might occur in voice channels and require moderator intervention include:

  • A user is being rude or disrespectful to a specific user
  • A user is saying discriminatory phrases or slurs in a voice channel
  • A user is playing audio or video of Not Safe For Work content
  • A user is rapidly joining and leaving a voice channel or multiple voice channels (voice hopping)
  • A user is producing audio shock content (loud noises intended to startle or harm a listener's ears)

Risk Management for Voice Channels

Before we even consider how we plan to moderate situations that may arise in voice channels, let's discuss ways to prevent them from happening in the first place, or at least make it easier for us to deal with them later. One of the easiest and most useful things you can do is set up voice channel logging. Specifically, you can log when a user joins, leaves, or moves between voice channels. Many moderation bots support this type of logging. You can read more about using bots for moderation in DMA article 321. For reference, here's what this kind of voice channel logging might look like:

[Image: example voice channel log entries showing a user joining, leaving, and moving between channels]

Having voice logs will allow you to catch voice hoppers without having to be present in a voice channel. It will also prove useful in verifying reports from server members and ensuring users can't make false reports about users who weren't actually present in a voice channel at the time. Another thing you can do to prevent trolls from infiltrating voice channels (and every other part of your server) is to have a good verification gate. You can read more about verification gates here.
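
To make logs like these actionable, you could sketch a simple voice-hop detector over them. The log format (tuples of user ID, timestamp, and event) and the thresholds below are illustrative assumptions, not the output of any particular bot:

```python
from collections import defaultdict

def find_voice_hoppers(events, max_joins=5, window_seconds=60):
    """Flag users who join voice channels more than `max_joins` times
    within any `window_seconds` span.

    `events` is a hypothetical log format: an iterable of
    (user_id, unix_timestamp, event) tuples, where event is
    "join" or "leave".
    """
    joins = defaultdict(list)
    for user_id, timestamp, event in events:
        if event == "join":
            joins[user_id].append(timestamp)

    hoppers = set()
    for user_id, times in joins.items():
        times.sort()
        # Slide a window over the sorted join times; if more than
        # `max_joins` joins fall inside one window, flag the user.
        for i in range(len(times)):
            j = i
            while j < len(times) and times[j] - times[i] <= window_seconds:
                j += 1
            if j - i > max_joins:
                hoppers.add(user_id)
                break
    return hoppers
```

Sensible values for `max_joins` and `window_seconds` depend on your server; the point is only that join/leave logs are enough to catch voice hopping after the fact.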

Handling Reports from Your Server Members

Now that we know what to look for when moderating voice channels and how to take some preventative measures, it's time to address the elephant in the room. What do we do when a member of the server reports rule breaking behavior in a voice channel, but no moderator was there to witness it? If we believe them and treat their word as fact, we can take care of the situation accordingly.

While this may work for certain situations, troublesome users may realize that the moderators are acting on all reports in good faith and begin to take advantage of this policy by creating false reports. This is obviously very problematic, so let's consider the opposite scenario. If a moderation team doesn't believe any reports and moderates situations only when a moderator is present, troublesome users can likely keep getting away with their rule-breaking behavior. In some cases, even if a moderator is available to join a voice channel when they receive a report, the troublesome user may stop their behavior the moment the moderator joins, making the report impossible to verify. This can be partially mitigated by moderators using alternate accounts to join the voice channel and appear as regular users, but ultimately there will be situations where moderators aren't available and reports will need to be considered on their own merits.

In general, no single report should be believed on its own. When a user reports a situation in a voice channel, the following questions should be asked:

  • Has the user made reports in the past? Were they legitimate?

Active users with a history of legitimate reports can likely be trusted; the more legitimate reports a user has made, the more likely it is that they can be trusted again. Even if a trusted user makes a false report at some point, it is often easy to undo any actions taken on it.

  • How long has the user been a part of the server and how much have they contributed?

Positive contributions to your server (such as quality conversations or being welcoming and supportive of other server members) are another way members can earn trust. This trust can be handled the same way you handle trust gained from legitimate reports.

  • Did multiple users report the situation or can others who were present confirm the report? If so, do these users have any sort of connection to each other? Could they be the same person on multiple accounts?

If multiple users report the same issue, and you know they are not connected, the report can safely be trusted as long as the information in the reports is consistent. Knowing when users are connected can be difficult in some cases, but some signs you can look for are: users who joined the server at the same time, users with IDs close to each other (signifying similar account creation times), similar behavior or patterns of talking, or interactions with each other which suggest the users know each other from somewhere else. It is important to ensure that this isn't an organized effort from a group of friends or alternate accounts targeting another user.
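
One of the signals above, user IDs close to each other, works because Discord IDs are snowflakes whose upper bits encode a creation timestamp (milliseconds since the documented Discord epoch, 2015-01-01 UTC). Here is a minimal sketch of comparing creation times; the `created_within` helper and its threshold are illustrative assumptions:

```python
from datetime import datetime, timezone

DISCORD_EPOCH_MS = 1420070400000  # 2015-01-01T00:00:00Z in Unix milliseconds

def snowflake_to_datetime(snowflake: int) -> datetime:
    """Recover the creation time embedded in a Discord snowflake ID."""
    # The upper 42 bits of a snowflake are milliseconds since the Discord epoch.
    unix_ms = (snowflake >> 22) + DISCORD_EPOCH_MS
    return datetime.fromtimestamp(unix_ms / 1000, tz=timezone.utc)

def created_within(id_a: int, id_b: int, seconds: float) -> bool:
    """True if two IDs were created within `seconds` of each other."""
    delta = snowflake_to_datetime(id_a) - snowflake_to_datetime(id_b)
    return abs(delta.total_seconds()) <= seconds
```

Two brand-new accounts reporting the same incident is not proof of coordination on its own, but it is exactly the kind of detail worth weighing alongside the other questions here.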

  • Does the report seem plausible? Is the person being reported someone you would expect to cause trouble?

There are many things you can look at when examining the user being reported. Some things to look out for are offensive or inappropriate usernames, profile pictures, or statuses, as well as any inappropriate messages the user has sent in the past. Inappropriate messaging can be anything from spam to rude or offensive remarks, or even odd and confusing behavior.

If the answers to these questions lead to more skepticism about the legitimacy of the report, it may be the case that the report can't be trusted. If you are not confident in the report's legitimacy, you should still make a note of it in case similar reports are made in the future. Repeated reports are one of the most common ways to establish that a report is likely legitimate, and they allow you to make a better-informed decision later on. The answers to these questions may also reveal that the report is likely illegitimate, in which case you may punish the reporter(s) accordingly. Intentional false reporting is a breach of trust with the moderation team, but reports made in good faith that are unactionable should not result in punishment.

Handling Severe Situations

Occasionally, you might find yourself faced with a difficult, time-sensitive situation where someone is at risk of being harmed or of harming themselves. Because nothing is recorded in voice channels, it is not possible to report these types of situations to Discord. If you witness situations such as these, or if you receive reports of them, you should reach out to those involved in DMs or a text channel in the server. You can also attempt to defuse the situation in the voice channel if you are confident in your ability to do so, but if there is no evidence of the situation in a DM or text channel, it may be harder to get the authorities involved should that become necessary. Once you have done that, follow the steps in DMA article 104 to report the user(s) to Discord. Whether or not you are able to move the situation to a DM or text channel, call your local authorities if you believe someone is in imminent danger. If you know the area where the person in danger lives, you may also want to call the authorities in their area.

Conclusion

Handling situations in voice channels can be difficult, but with the right tools and protocols in place, your server's moderation team can be prepared for anything. After reading this article, you should have a good understanding of when voice moderation is needed and how to handle voice moderation scenarios properly. This article outlined some of the most common situations to look out for, as well as how to prepare for some of them proactively. It also showed how you can handle user reports in a way that minimizes the possibility of acting on false reports. Finally, you learned how to handle severe, time-sensitive situations where someone's life may be in danger. There's a lot to consider when you're moderating voice channels, but by following these tips, you should be well equipped to moderate the voice channels in your server!

Ready to test your moderator skills? Take the Discord Moderator Exam!