A new report from GLAAD has revealed that major social media platforms are failing to protect LGBTQ+ users from hate speech and harassment. The Social Media Safety Index evaluated platforms including Facebook, Instagram, TikTok, YouTube, and X (formerly Twitter) on criteria related to LGBTQ+ inclusivity. Most platforms received failing grades, with TikTok showing a slight improvement thanks to a recent policy prohibiting the targeting of users based on sexual orientation or gender identity. Despite having policies on paper to protect LGBTQ+ users, these platforms are allowing harmful rhetoric and misinformation to spread.

X, formerly Twitter, received the lowest rating for its failure to curb anti-LGBTQ+ misinformation. Influencers like Chaya Raichik, who posts false information about gender-affirming care and equates LGBTQ+ people with "groomers" and "pedophiles," are fueling dangerous online rhetoric. That misinformation has had real-world consequences, including bomb threats and violence against the LGBTQ+ community. And while X has become a major venue for anti-LGBTQ+ sentiment, it generates far less revenue than Meta, which has likewise allowed harmful content to remain on its platforms.

The report also highlights instances where legitimate LGBTQ+ content has been targeted or labeled as "sensitive" by social media platforms. For example, an Instagram post from the nonprofit Men Having Babies depicting two gay fathers with their newborn child was flagged as "sensitive content." Labels like these are unwarranted and make the platforms less safe and less inclusive for LGBTQ+ users. The growing use of artificial intelligence for content moderation has raised further concerns about bias in how queer people are depicted and targeted on social media.

Some tech companies have developed “automated gender recognition” technology to predict a person’s gender for targeted advertising purposes. However, privacy advocates warn that these technologies could be used for surveillance in gendered spaces, like bathrooms and locker rooms. While some regions, such as the European Union, have implemented restrictions on AI and regulated social media platforms, the United States has lagged behind. The GLAAD report recommends that platforms strengthen and enforce their current policies to protect LGBTQ+ users and improve content moderation without relying solely on automation.

In short, the GLAAD report sheds light on how major social media platforms are failing to protect LGBTQ+ users from hate speech and harassment. Despite having policies in place, these platforms are not enforcing them effectively, allowing harmful rhetoric to proliferate. The targeting of legitimate LGBTQ+ content and the reliance on AI for content moderation raise additional concerns about bias and surveillance. The report calls for stronger enforcement of existing policies and better moderation practices, and it is essential that social media companies act to create safer, more inclusive online spaces for all users, including those who are LGBTQ+.
