Will Instagram’s ban on self-harm images be enough to protect vulnerable users?

Earlier this week, Instagram announced that it was extending its ban on self-harm content to cover drawings and cartoons.

This follows the platform's pledge in February to remove all graphic self-harm images, a commitment made after Instagram reviewed how well it was keeping the site safe for its community of vulnerable users.

Only a month before, the suicide of 14-year-old Molly Russell had come to light. After Molly took her own life in 2017, her parents found she had viewed self-harm and suicide-related content on her Instagram account, suggesting her death may have been influenced by what she saw.

[Embedded Instagram post from the Molly Rose Foundation (@mollyrosefoundation): “Has Molly’s story changed social media? Find out tonight at 10pm on BBC news and online”]

Rates of poor mental health among young people are worryingly high. According to figures from the UK’s Office for National Statistics, the suicide rate among males aged 10-24 rose to 9 deaths per 100,000 in 2018, while 3.3 young females per 100,000 took their own lives that year.

Also, according to MentalHealth.org, 20% of adolescents may experience a mental health problem in any given year, and 50% of mental health problems are established by the age of 14, with 75% established by the age of 24.

In July 2019, it was recorded that 37.2% of Instagram users were aged 13-24. This means a large proportion of the platform’s users could be considered vulnerable and at risk.


Molly is not the only person potentially influenced by self-harm content on Instagram. In January this year, 16-year-old Libby spoke to the BBC about being ‘hooked’ on ‘viewing’ and ‘posting’ self-harm content on Instagram when she was 12 years old. She recalled sharing “pictures of her fresh cuts” with an audience of 8,000 followers.

Her dad Ian recalled comments underneath Libby’s posts saying: “You shouldn’t have done it this way, you should have done it like that. Don’t do it here, do it there because there’s more blood.”

It is frightening to think that other Instagram users, who are potentially in a vulnerable position themselves, were encouraging Libby to self-harm and put herself at risk.

What’s more frightening is that, when the family attempted to report the posts, Instagram responded that the pictures did not breach its community standards.


Ian Russell, Molly’s father, founded the Molly Rose Foundation after her death. The foundation’s aim is suicide prevention, with a focus on young people under the age of 25. They seek to help those suffering from mental illness by offering advice and connecting them with sources of support.

Ian Russell believes a short-term solution is for social media websites to put more focus on reviewing reported content: “I think it is vital for platforms to respond more effectively to their customers’ requests to remove any harmful content found as the standard reply of such content not infringing the platforms’ community guidelines is still too often received when complaints are made.”

Nine months on, Adam Mosseri, the head of Instagram, announced how the platform would implement the removal of harmful drawings and memes: “We will no longer allow fictional depictions of self-harm or suicide on Instagram, such as drawings or memes or content from films or comics that use graphic imagery.”

“Accounts sharing this type of content will also not be recommended in search or in our discovery surfaces, like ‘Explore’.”

However, searching the Facebook-owned app this week makes it clear that self-harm-related content is still all over Instagram. When searching the #selfharm tag and filtering the results to ‘accounts’, a number of accounts with triggering content pop up.

Whilst the pledge is a step in the right direction, Ian Russell is sceptical about whether Instagram will follow through: “I think this is an important step forward and sets a lead that other platforms, who up until now have remained almost silent on this issue, to follow. However, it is very hard, from outside the tech corporations, to judge just how committed to the removal of harmful content Instagram really is.”

The biggest step Instagram has taken so far is hiding posts categorised under the ‘#suicide’ tag. When you search the tag, it appears there are 8.4 million posts, but the results are concealed and a ‘Get Support’ option appears instead.

This directs you to the ‘Can We Help’ section of Instagram’s website, which gives you the opportunity to talk to someone or to access information that other people have found supportive.


Instagram claims to have removed 834,000 pieces of content between April and June, 77% of which had not been reported by users. However, around 95 million photos and videos are shared on Instagram every day, so over that three-month period the platform only found and removed an average of just over 9,000 posts with dangerous content per day. This leaves room for millions of self-harm posts to go undiscovered by Instagram.
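As a rough sanity check of those figures, the short sketch below works through the arithmetic. It is a back-of-the-envelope calculation only, using the numbers quoted in this article and assuming a 91-day April-June window; it is not Instagram's own reporting methodology.

```python
# Back-of-the-envelope check of the removal figures quoted above.
# The inputs are the numbers reported in this article, not official Instagram data.

removed_total = 834_000      # pieces of content Instagram says it removed, April-June
days_in_period = 91          # assumed length of the April-June reporting window, in days
daily_uploads = 95_000_000   # reported photos and videos shared on Instagram per day

removed_per_day = removed_total / days_in_period
share_of_daily_uploads = removed_per_day / daily_uploads

print(f"Average removals per day: {removed_per_day:,.0f}")              # roughly 9,165
print(f"Share of daily uploads removed: {share_of_daily_uploads:.4%}")  # roughly 0.0096%
```

Running this reproduces the ‘just over 9,000 per day’ figure and shows that, on these reported numbers, the removals amount to roughly one in every ten thousand daily uploads.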


The only ways Instagram can find dangerous content are through posts being reported or images being tagged with suicide-related terms, so it is likely that a lot of harmful content is being missed.

Ian Russell feels that, without changes to Instagram’s algorithms, there will be little improvement in protecting vulnerable users: “It is likely that sizeable improvements will only be made if the platforms’ algorithms are adapted to provide better protection and stop the dangerous spread of harmful content; it would be more beneficial if any development in this area is freely shared to ensure as widespread benefit as possible.”


Instagram says it will not remove all content relating to suicide, because some of it consists of recovery stories that can be a form of support for some users. However, what counts as support for some might trigger other vulnerable users.

Whilst Ian Russell thinks all users should be considered, he feels the focus should be on ensuring vulnerable users are protected: “The risk to the wider body of users, including those who are vulnerable, should be balanced against any benefit this content may bring to other communities.”

Ian Russell believes a synergy between tech companies, academics and charities would be the best way to help vulnerable users: “I think the tech companies should more openly work together with academics and charities in this field, to ensure as much as possible is being done and it is co-ordinated across the whole industry.”

With Instagram making a conscious effort to protect vulnerable users, there is hope that other social media platforms will follow suit and create a safer community for young people at risk.

Read more from lifestyle here: 

The scariest thing about Halloween is the waste!

Scottish islands among the happiest places to live in the UK

