Instagram is introducing new technology to its app in the UK and Europe that can better identify suicide and self-harm content which breaks the app’s rules.
The new moderation tools proactively spot self-harm content and automatically make it less visible in the app; in some cases, if the machine learning is confident a post breaks the site’s rules, it is removed completely after 24 hours.
The feature is already used on Facebook and Instagram outside the EU, where it includes additional layers: once spotted, posts are referred to human reviewers, who can then take further action such as connecting the poster to local help organisations or, in the most severe cases, calling emergency services.
However, Instagram confirmed these referral aspects are not yet ready to be introduced to the UK and Europe because of data privacy considerations linked to the General Data Protection Regulation (GDPR).
The social media giant said it hoped it would be able to introduce the full set of tools in the future.
Instagram’s public policy director in Europe, Tara Hopkins, said: “In the EU at the moment, we can only use that mix of sophisticated technology and human review element if a post is reported to us directly by a member of the community.”
She said that because, in a small number of cases, a human reviewer would assess whether to send additional resources to a user, regulators could consider this a “mental health assessment” and therefore special category data, which receives greater protection under GDPR.
Ms Hopkins said the company was in discussions with the Irish Data Protection Commission (IDPC) – Facebook’s lead regulator in the EU – and others over the tools and a potential introduction in the future.
“There are ongoing conversations that have been very constructive and there’s a huge amount of sympathy for what we’re trying to achieve and that balancing act of privacy and the safety of our users,” she said.
In a blog post announcing the update, Instagram boss Adam Mosseri said it was an “important step” but that the company wanted to do “a lot more”.
He said not having the full capabilities in place in the EU meant it was “harder for us to remove more harmful content, and connect people to local organisations and emergency services”.
He added that the firm was in discussions with regulators and governments about “how best to bring this technology to the EU, while recognising their privacy considerations”.
Facebook and Instagram are among the social media platforms to come under scrutiny for their handling of suicide and self-harm material, with particular concern over its impact on vulnerable users, especially young people.
Fears about the impact of social media on vulnerable people have also grown amid cases such as that of 14-year-old schoolgirl Molly Russell, who took her own life in 2017 and was found to have viewed harmful content online.
Molly’s father, Ian, who now campaigns for online safety, has previously said the “pushy algorithms” of social media “helped kill my daughter”.
In September, Facebook and its family of apps were among the companies to agree to guidelines published by Samaritans in an effort to set industry standards on how to handle the issue.
Ms Hopkins said Instagram was trying to balance its policies on self-harm content by also “allowing space for admission” by people who have considered self-harm.
“It’s okay to admit that and we want there to be a space on Instagram and Facebook for that admission,” she said.
“We’re told by experts that can help to destigmatise issues around suicide. It’s a balancing act and we’re trying to get to the right spot where we’re able to provide that kind of platform in that space, while also keeping people safe from seeing this kind of content if they’re vulnerable.”