Twitter Says #ShutUp to Online Abuse – You’ll Soon Be Able to Mute Words & Conversations
Twitter plays an important role in community discussion. Even as it struggles to attract buyers, the network proved throughout the election season that it remains one of the fastest sources of timely information. But there is no denying that few other sites host such an entrenched community of abusers who troll, abuse, and harass anyone who dares to disagree with them. Twitter is now rolling out new features to combat online abuse and offer users a better experience on the platform.
In its announcement today, Twitter said the company has struggled to keep up with and curb abusive content because “Twitter happens in public and in real-time.” With new privacy features, Twitter is focusing on improved controls, better reporting, and most importantly, enforcement.
Twitter focuses on three areas to combat online abuse and hate speech
Twitter has long offered a mute feature that lets users hide tweets from accounts they don't want to see. Now the company is expanding that functionality to words and conversations:
We’re expanding mute to where people need it the most: in notifications. We’re enabling you to mute keywords, phrases, and even entire conversations you don’t want to see notifications about, rolling out to everyone in the coming days. This is a feature we’ve heard many of you ask for, and we’re going to keep listening to make it better and more comprehensive over time.
The company is also improving how users report online abuse. Twitter's conduct policy prohibits behavior that targets people on the basis of race, gender, identity, age, disability, ethnicity, or sexual orientation. Twitter is now focusing on collective support, letting users report this type of conduct to the network even when they are not the target.
Today we’re giving you a more direct way to report this type of conduct for yourself, or for others, whenever you see it happening. This will improve our ability to process these reports, which helps reduce the burden on the person experiencing the abuse, and helps to strengthen a culture of collective support on Twitter.
Even with improved reporting, the effort could fall flat if Twitter itself isn't equipped to deal with the incoming reports. Over the last half decade, several social media sites have struggled to properly handle reported content – Facebook's removal of the iconic "Napalm Girl" photo being one of the most recent examples.
Twitter has now retrained its support staff to better handle reports, and has improved the tools it uses in-house.
We’ve retrained all of our support teams on our policies, including special sessions on cultural and historical contextualization of hateful conduct, and implemented an ongoing refresher program. We’ve also improved our internal tools and systems in order to deal more effectively with this conduct when it’s reported to us. Our goal is a faster and more transparent process.
The new anti-abuse features will roll out to Twitter's apps in the coming days. The latest election cycle was yet another reminder of how ugly an entire platform can get when fake accounts are created to hurl abuse, promote hate speech, and drown out important voices. With Google turning to artificial intelligence and Twitter improving its tools, we can only hope to finally see an end to this ugly face of the internet.