Twitter has announced new updates to its safety features. It will leverage its own technology to reduce abusive content, give users more control over their experience on the microblogging site, and communicate more clearly with them about the actions it takes against abusers.
In a blog post, the social network announced that users will now be able to mute directly from their home timeline and set a time limit on how long keywords, phrases or accounts remain muted.
The feature enables users to hide certain content and accounts from sight.
Twitter also revealed it will now use its algorithms to more actively identify accounts that spread abusive content.
The company will then limit the functionality of those accounts for an unspecified period and may even remove them if the abuse doesn’t stop.
Limiting functionality will include allowing only the sanctioned user’s followers to see their tweets.
At present, accounts are deleted or suspended when marked as abusive.
It is also introducing new filtering options that will allow users to hide notifications from accounts without a profile photo, a verified email address or a phone number.
The social media site’s engineering chief Ed Ho said: “We aim to only act on accounts when we’re confident, based on our algorithms, that their behaviour is abusive.
“Since these tools are new we will sometimes make mistakes, but know that we are actively working to improve and iterate on them every day.”
Twitter has frequently been criticised in the past for the way it handles reports of abuse on the site, and in recent months it has added several new tools in an attempt to clamp down on the problem, a process the service says is ongoing.
Earlier this month the social media company said it would make it harder for abusive users to create new accounts, launch a “safe search” function, and begin collapsing tweet replies deemed abusive or low-quality so they are hidden from immediate view.
“We’re continuing to improve the transparency and openness of our reporting process,” Mr Ho said.
“You’ll start to hear more from us about accounts or tweets that you’ve reported to our support teams – both when you report harassment directed at you or another account.
“You will be notified when we’ve received your report and informed if we take further action.
“This will all be visible in your notifications tab on our app.”
Mr Ho also said the site would learn from its “mistakes” and referenced a recent change which Twitter withdrew within hours of implementing after user backlash.
The site had removed the notifications that alerted users when they were added to a list by someone else, a feature some exploited to bully others by adding them to lists with offensive names.
Many users quickly complained to Twitter that the change merely covered up potential abuse rather than confronting it, and the change was subsequently reversed.
“We’re learning a lot as we continue our work to make Twitter safer – not just from the changes we ship but also from the mistakes we make, and of course, from feedback you share,” Mr Ho said.
Twitter, Facebook and other internet companies have faced growing complaints in recent years over how they monitor and police their content, as users and governments have stepped up pressure on Silicon Valley to prevent violent extremist propaganda, curtail harassment and bullying, and limit fake news.
Those efforts have often clashed with free-speech activists who have warned about internet censorship and some political groups that claim they are being unfairly targeted.