Tang
2 min read · Mar 21, 2021


This week’s topic, which dealt with trolling on the internet, online hostility and the spread of misinformation, underlined two major concerns. March (2019) observed that the once comic nature of online trolling has evolved into something far more sinister, with trolls now demonstrating the antisocial and malicious behaviours that were previously attributed only to online flaming. Additionally, the reading by Rainie et al. (2017) highlighted that the rise of online hostility, trolling and misinformation may eventually shrink the free space on social media platforms, as artificial intelligence comes to be used to police online interaction. The authors pointed out that while social media platforms are continually being pressed into devising safety measures against online hostility, misinformation and trolling, these same measures might hinder the open exchange of ideas online and compromise user privacy. Understandably, online hostility and the possible options for containing it remain a nuanced topic of discourse, especially because of the cost in freedom that might accompany the different safeguards.

While I have not personally been subjected to online trolling, partly because I am only an occasional internet user who rarely engages in online discussions, I have seen friends go through it. I remember a football debate between some friends on Facebook that turned extremely hostile when supporters of a rival club began trolling them with abusive language. Although this particular incident was especially distasteful because of its aggression and its personal yet illogical attacks, I have seen many smaller but similar incidents where people become hostile and abusive at the slightest provocation. Since ordinary users cannot stop an abusive user from trolling, I feel the best option is to report and block them. Interestingly, as discussed by Lanius et al. (2021) in their journal article, social media platforms like Twitter have already begun using flags to mark questionable posts and misinformation spread by bot accounts, thereby alerting users to the unreliability of such content. Similarly, as a user I have seen Facebook block accounts that have multiple reports against them. Such active measures can go a long way in curbing abusive behaviour among some social media users and making the online space safer for everyone.

Reference list

Lanius, C., Weber, R., & MacKenzie, W. I. (2021). Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey. Social Network Analysis and Mining, 11(1), 1–15.

March, E. (2019, February 4). Online trolling used to be funny, but now the term refers to something far more sinister. The Conversation. Retrieved March 18, 2021, from https://theconversation.com/online-trolling-used-to-be-funny-but-now-the-term-refers-to-something-far-more-sinister-110272

Rainie, L., Anderson, J., & Albright, J. (2017, March 29). The future of free speech, trolls, anonymity and fake news online. Pew Research Center. Retrieved March 18, 2021, from https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/
