YouTube’s New Harassment Policy Now Bans Veiled Threats and Toxic Comments

Dec 11, 2019

Whether you like it or not, platforms such as YouTube and Twitter are now among the primary venues for public discourse. This discourse is fundamentally different from what came before, however, because much of it takes place under a veil of anonymity. The ability to speak without fear of consequence is a powerful one and has been used to achieve great things in the past. These days, it is just as often used to call people nasty things online.

Just like every platform that allows its users to comment under a pseudonym, YouTube has had more than its share of controversies. The platform has been used repeatedly to doxx, smear, and spread misinformation. YouTube has enacted several policies aimed at discouraging such behavior, to little avail. Those policies just got a lot stricter, as described in a post by YouTube's Global Head of Trust & Safety:


Harassment hurts our community by making people less inclined to share their opinions and engage with each other. We heard this time and again from creators, including those who met with us during the development of this policy update.

There's a long blog post detailing all of the new changes, which take effect immediately; you can read it in its entirety here. It is also worth noting that these changes apply not only to YouTube videos but also to the comments posted below them. Let's take a look at some of the new policies.

YouTube's rules already forbade direct threats of violence and harassment based on protected characteristics. The new policies also cover 'veiled or implied threats,' which YouTube says include 'content simulating violence toward an individual or language suggesting physical violence may occur.' Also prohibited is 'demeaning language that goes too far.'

People who follow YouTube drama will be able to identify the origin of this rule rather easily. I'm not going to name any names; suffice it to say that one of the parties involved rhymes with powder. What constitutes an implied threat, or demeaning language that goes too far, is anybody's guess. Chances are, even the folks over at YouTube don't know.

The second change deals with 'repeat offenders,' i.e. channels that have repeatedly violated the company's policies. Such channels will now be suspended from the YouTube Partner Program, barring them from making any money on the platform. In some instances, YouTube reserves the right to issue strikes and even delete entire channels at its discretion.

The final change is arguably the most controversial of the lot. It sheds light on how YouTube will deal with 'toxic comments.' A lot of YouTube comments aren't exactly civil and often devolve into flame wars that can span weeks, sometimes even months. To quote YouTube:


At the same time, we heard feedback that comments are often where creators and viewers encounter harassment. This behavior not only impacts the person targeted by the harassment, but can also have a chilling effect on the entire conversation.

YouTube will now allow creators to 'hold potentially inappropriate comments for review' and says that the feature will be enabled on most accounts by the end of the year. Once again, there is no mention of what exactly counts as a 'potentially inappropriate comment'; at this point, it could be anything. Take a look at YouTube's official Twitter thread about the changes below.

Either way, it is a lose-lose situation for YouTube. The platform will perpetually be bombarded from both sides of the aisle: one decrying the death of free speech, the other lamenting YouTube's inability to counter extreme content. At the end of the day, it is the creators who get the short end of the stick; they'll be the ones arbitrarily losing revenue and subscribers to ambiguous policies. Will these changes make YouTube a safer platform? It's hard to tell. As the old saying goes, "the internet always finds a way."
