Facebook is compromising users’ privacy with artificial intelligence that identifies those at risk of suicide, according to new research.
The social network has developed an algorithm that spots mental health warning signs in posts – and the comments they generate.
But the initiative has raised concerns about ethics and transparency when using consumer data in this way.
Study lead author Dr John Torous, of Beth Israel Deaconess Medical Center at Harvard Medical School in the US, said: “Facebook’s suicide prevention efforts lead to the question of whether this falls under the scope of public health.”
If a user is deemed at risk, the company calls local emergency services. That, psychiatrists say, makes it medical research – and subject to the same rules and ethical principles.
Last month Ian Russell, father of 14-year-old Molly Russell, blamed Instagram, which is owned by Facebook, for his daughter’s death.
Molly took her own life in November 2017 after being exposed to what was described as ‘suicide porn’ content on Instagram.
Dr Torous said: “The approach Facebook is trialing to reduce death by suicide is innovative and deserves commendation for its ambitious goal of using data science to advance public health, but there remains room for refinement and improvements.”
Facebook has been passing information about at-risk users to law enforcement in the US for wellness checks since March 2017.
The programme was launched to proactively address a serious problem after a string of suicides were live-streamed on the platform.
A wave of privacy scandals has already brought Facebook’s use of data into question.
So the idea of the company creating and storing mental health data without user consent has privacy experts worried about whether Facebook can be trusted to make and store inferences about the most intimate details of our minds.
While it creates new health information about users, Facebook is not held to the same privacy standards as healthcare providers.
In determining whether a person is at risk of suicide, it accumulates a large amount of personal medical and mental health information.
It also activates the public health system by calling emergency services – an intervention that requires equal access and efficacy for all.
Dr Torous said: “The scope of the research seems more fitting for public health departments than for a publicly traded company whose mandate is to return value to shareholders.
“What happens when Google offers such a service based on search history, Amazon on purchase history, and Microsoft on browsing history?
“In an era where integrated mental health care is the goal, how do we prevent fragmentation by uncoordinated innovation?
“And even if it falls outside the scope of public health, discussions of regulation, longevity and oversight are still needed for this approach to be equitable and successful.”
Writing in the Annals of Internal Medicine, the researchers said Facebook has offered some details on its algorithms.
But less is known about the credentials of the Community Operations staff who review the flagged posts – or the outcomes of the roughly 3,500 calls made to local emergency services so far.
Once a flag is confirmed, the company contacts those thought to be at risk of self-harm to suggest ways they can seek help. The tool is currently being tested only in the US.
The researchers said Facebook does not claim its suicide prevention efforts are research. But it has conducted experiments on users before and denied it was doing so.
Dr Torous said: “While the crisis around suicide requires innovative approaches, abnegation of human subject protections, peer-reviewed science and public health will only hinder progress and the adoption of innovation.”
The algorithm touches nearly every post on Facebook – rating each piece of content on a scale from zero to one.
Data protection laws that govern health information in the US currently don’t apply to the data that is created by Facebook’s suicide prevention algorithm.
By Ben Gelblum and Mark Waghorn