Social Media Tries To Be Socially Responsible: A Look At Instagram’s Self-Harm Ban – Q+M


Sometimes, social media gets it right. Facebook and Mark Zuckerberg have effectively punted on the question of false political ads on the platform, on top of a decade's worth of privacy issues. But Facebook's little brother is at least trying to better serve its core user base in the face of the self-harm crisis.

Social media has changed the way we live, but the youngest generations have never known a world without smartphones and notifications. The constant connectivity of these platforms means that young people can easily stay in touch with friends once the school bell rings, but it also means that problems like bullying, teasing, and anxiety follow them home. In effect, the confusing, challenging experience of navigating the social environment of elementary, middle, and high school never turns off, or at least only shuts off when the phone does.

The impact has been startling, but perhaps no social outlet has had as negative an influence as Instagram. A 2017 study found that Instagram had the greatest potential to cause anxiety, depression, fear of missing out, isolation, and self-harm. Instagram ranked only slightly worse than Snapchat, another platform used primarily by teens, which means that the two most harmful social media outlets may be the ones chosen by the most vulnerable users.

In that same year, the death of Molly Russell caused outrage and sparked a concerted effort to remove harmful media from Instagram and other social sites. Russell, just 14 at the time, died by suicide after viewing images and videos depicting self-harm. Her father believes that Instagram is at least partly responsible for his daughter's death.

To its credit, Instagram has been proactive in the fight to identify images and videos that depict self-harm. In February of 2019, the company announced a ban on self-harm images and began using automated systems to detect and remove content it deems to promote self-harm or suicide. This screening removed nearly 1 million posts and reduced this type of content by approximately 77%, according to a statement released by Head of Instagram Adam Mosseri. Now, improvements to the software mean Instagram will be able to identify and remove even more types of these posts, including drawings, memes, and videos.

Instagram is also taking an additional step to lower the visibility of profiles that propagate banned content. Instagram confirmed that it will be able to hide these accounts from the Explore feature of the app, limiting those users' ability to gain followers and reach. It's a big step forward for a platform, and indeed an entire industry, that has been heavily criticized for its impact on mental health.