Twitter has launched a new update that encourages people to think twice before posting a scathing message.
In an announcement Wednesday, Twitter said the new prompt flags “potentially harmful or offensive” replies to a tweet before a user hits send. The feature rolled out the same day on iOS and Android, beginning with accounts that have English-language settings enabled.
In tests of the prompt last year, Twitter found that 34% of users revised their initial “potentially offensive” replies or chose not to send them at all, and that prompted users were less likely to receive offensive replies in return. Moreover, after being prompted once, people composed 11% fewer offensive replies in the future.
Following last year’s research, the social media firm also made adjustments for nuance, such as sarcasm, including taking into account the relationship between the author and the replier, based on how often they interact, which can give a better sense of the tone of a conversation.
According to the company’s announcement, other changes included improvements to the technology to account for situations in which language may be reclaimed by underrepresented communities and used in non-harmful ways, as well as updates to better detect strong language, including profanity.
The company also said it has made it easier for users to share feedback on whether the prompt was helpful.
A report by Amnesty International and Element AI, a global artificial intelligence software company, analyzed tweets received throughout 2017 by 778 female journalists and politicians from Britain and the United States. It found 1.1 million “problematic” or “abusive” tweets sent to them, nearly one every 30 seconds on average. Women of color were 34% more likely to be mentioned in such tweets, and Black women were 84% more likely than White women to be mentioned in abusive or problematic tweets.