Social media giants should be forced to hand over data and pay towards research into their potential harms, a new report backed by the father of Molly Russell argues.
Concerns about the impact of social media on vulnerable people come amid suicides such as that of Molly, the 14-year-old schoolgirl who died in 2017 and was found to have viewed harmful content online.
The Royal College of Psychiatrists said a proposed 2% levy on the UK revenues of major tech companies does not go far enough. Instead, it wants the so-called “turnover tax” to apply to international turnover and for some of the money from it to be used for mental health research.
Molly’s father, Ian Russell, spoke of the urgent need for greater action in an emotional foreword to the report, in which he described the “wrecking ball of suicide” that “smashed brutally” into his family, blaming “pushy algorithms”.
He said of her social media accounts: “Among the usual schoolfriends, pop groups and celebrities followed by 14-year-olds, we found bleak depressive material, graphic self-harm content and suicide-encouraging memes. I have no doubt that social media helped kill my daughter.”
Mr Russell also shared one of Molly’s final notes, which he said described how she felt “with heart-breaking clarity”.
“I’m the weird sister, quiet daughter, depressed friend, lonely classmate,” she wrote. “I’m nothing, I’m worthless, I’m numb, I’m lost, I’m weak, I’m gone. I’m sorry. I’ll see you in a little while. I love you all so much. Have a happy life. Stay strong xxx.”
Algorithms pushed ‘depressive suicidal content’ at Molly Russell
Molly Russell was a 14-year-old schoolgirl who took her own life in 2017 after viewing harmful images on Instagram, a Facebook-owned social media platform.
She had entered a “dark rabbit hole of depressive suicidal content”, her father said.
Ian Russell holds Instagram partly responsible for his daughter’s death. “I think Molly probably found herself becoming depressed,” he told BBC News last October.
“She was always very self-sufficient and liked to find her own answers. I think she looked towards the internet to give her support and help. She may well have received support and help, but what she also found was a dark, bleak world of content that accelerated her towards more such content.”
Mr Russell claimed the algorithms used by some online platforms “push similar content towards you” based on what you have previously been looking at.
He said: “I think Molly entered that dark rabbit hole of depressive suicidal content. Some were as simple as little cartoons – a black and white pencil drawing of a girl that said ‘Who would love a suicidal girl?’. Some were much more graphic and shocking.”
Instagram said that, between April and June 2019, it removed 834,000 pieces of content, 77% of which had not been reported by users.
But Mr Russell said: “It would be great if they could find a way to take down 10 times the number of posts and really reduce the potentially harmful content that is on their platform.”
Instagram chief executive Adam Mosseri said: “Nothing is more important to me than the safety of the people who use Instagram. We aim to strike the difficult balance between allowing people to share their mental health experiences – which can be important for recovery – while also protecting others from being exposed to potentially harmful content.”
Mr Russell urged parents to speak with their children about what they are viewing online and how they are accessing it.
Psychiatrists call for social networks to hand over data amid suicide concerns
While welcoming the UK Government’s White Paper on online harms, the Royal College of Psychiatrists’ report calls for an independent regulator with the power to establish a protocol for social media companies to share data, such as behavioural data, with universities for research.
It also points to evidence that increased social media use may result in poorer mental health, particularly in girls.
Dr Bernadka Dubicka, chairwoman of the child and adolescent faculty at the Royal College of Psychiatrists and co-author of the report, said: “As a psychiatrist working on the front line, I am seeing more and more children self-harming and attempting suicide as a result of their social media use and online discussions.
“We will never understand the risks and benefits of social media use unless the likes of Twitter, Facebook and Instagram share their data with researchers. Their research will help shine a light on how young people are interacting with social media, not just how much time they spend online.
“Self-regulation is not working. It is time for Government to step up and take decisive action to hold social media companies to account for escalating harmful content to vulnerable children and young people.”
In a joint article for The Daily Telegraph, Mr Russell and Dr Dubicka wrote: “On social media, Molly found a world, sadly full of similarly struggling people with a marked lack of access to professional help, that grew in importance to her.
“Social media’s pushy algorithms sucked her further into her digital life, and continued to feed harmful content to her. The posts she saw would clearly have normalised, encouraged and escalated her depression; isolated her and persuaded her to keep it all to herself. They convinced her she had no hope.”
Claire Murdoch, NHS national director for mental health, said: “If these tech giants really want to be a force for good, put a premium on their users’ well-being and take their responsibilities seriously, then they should do all that they can to help researchers better understand how they operate and the risks posed – until then they cannot confidently say whether the good outweighs the bad.”
The biggest social network, Facebook, said it is “already taking a number of the steps recommended” in the report.
“We remove harmful content from our platforms and provide support for those who search for it,” a spokesman said. “We are working closely with organisations such as the Samaritans and the Government to develop industry guidelines in this area.”
A Government spokesman said: “We are developing world-leading plans to make the UK a safer place to be online. This includes a duty of care on online companies, overseen by an independent regulator with tough enforcement powers, to hold them to account.
“The regulator will have the power to require transparency reports from companies outlining what they are doing to protect people online. These reports will be published so parents and children can make informed decisions about their internet use.”