
Free Speech and Social Media


The internet hosts many types of media and content, and hate speech can be found on a variety of platforms and online forums. Generally, platforms known for permissive terms of service, or those that do not engage in active content moderation, tend to host more hate speech.

Although generally prohibited, hate speech is not uncommon on mainstream social media platforms like Twitter and Facebook. These companies prohibit hate speech and strive to take it down, but they are not always able to do so. Sometimes the content hasn't been reported or flagged by the software designed to find it. Sometimes the platform's reporting channels are backed up and content moderators haven't gotten to it yet. Sometimes questionable content is part of a private or group discussion that is off-limits even to the company's moderators. And sometimes content reported as hate speech doesn't fall within the company's definition of prohibited content, or moderators make a bad judgment call because they don't fully understand or follow the company's policies.

The majority of public criticism about hate speech falls on mainstream, popular platforms, but many young people are moving to newer online spaces, especially gaming-related, live-streaming, and image-sharing platforms. Hate speech can also be found in chat rooms and message-board-style forums, including those known for so-called "controversial speech," as well as more mainstream sites, where it can slip in during live gaming sessions or in chats and forums that are not constantly moderated.

Hate speech can be found in videos, cartoons, drawings, even photos, so image- and video-based platforms can also contain hateful content. These platforms have a range of moderation, from users creating and enforcing the rules to no moderation at all. Message boards contain user-generated content, and in some forums any type of content is allowed. These boards are the birthplace of many memes and internet hoaxes, which often include hateful speech.

Hate speech also occurs on image and video sharing platforms, some of which are extremely well known and popular. The bigger ones, and even some of the smaller ones, are moderated, but they still face challenges with content moderation because images are often presented without the context needed to judge them. These platforms contain a mix of user-generated content and advertisements, and youth and internet influencers are more often found here than on Facebook and Twitter.

Finally, hate speech flourishes on fringe platforms. New platforms are developed all the time, and existing ones sometimes shut down. Many of these fringe platforms were created in response to content moderation and concerns over "censorship" on mainstream platforms. Generally, any type of content is permitted, and many users belong to fringe groups or extremist audiences that produce and consume hate speech.

These forums can be operated in the U.S. or in other countries, and even if the content is illegal, it may be difficult to compel the platform to remove it. Why is hate speech found on such platforms? Every minute, millions of posts are created and shared on social media. The scope and scale of online content is so immense that human moderators cannot enforce the platforms' terms of service manually. Artificial intelligence-based systems are still new and lack the understanding of context needed to distinguish hate speech from permissible political critique, artistic expression, or unpopular opinion. Even with the best terms of service, both human moderators and AI-based systems are subject to mistakes and misinterpretations.

Additionally, as previously noted, the law does not force companies to moderate content, but it also doesn’t prevent them from doing so.
The First Amendment protects speech from government interference and allows most hate speech, apart from incitement to violence and a few other limited categories such as child pornography. It is important to emphasize, though, that the First Amendment applies to government, not to private companies, including the platforms that create online spaces. Those companies have a legal right to determine what is and isn't acceptable on their platforms.
But even if platforms do not have a legal responsibility, they may still have a moral obligation to serve the best interests of their users, who overwhelmingly do not want forums full of hateful content. Platforms that wish to serve diverse populations have a social obligation, and a business model, that should value inclusivity. But they sometimes fail at this. Even companies with strict anti-hate speech policies face challenges in knowing exactly where to draw the line between protecting diversity of viewpoints and combating hate speech. Parents should familiarize themselves with the policies of the platforms their children use, or want to use, so that they understand the type of content hosted on each. Parents can then have informed conversations with their children about what they see and how they can respond.
