But the question remains: should Facebook change the way it monitors users for suicide risk?
"People need to know that … they can be experienced"
When artificial intelligence tools flag a risk of self-harm, those posts go through the same human review as posts reported directly by Facebook users.
If the AI, or another Facebook user, flags a post, the company reviews it. If the post calls for immediate action, Facebook may work with first responders, such as police, to send help.
The paper argues that Facebook's suicide prevention efforts should be held to the same standards and ethics as clinical research, including review by outside experts and informed consent from the people whose data is collected.
"There is a need for discussion and transparency about innovation in the field of mental health in general.I think technology has great potential to improve suicide prevention, to help mental health in general, but people must be aware that these elements are essential. " happens and, in some ways, they can be experienced, "said Torous.
"We all agree that we want to innovate in suicide prevention, we want new ways to reach people and help them, but we want it to be done in an ethical way, transparent and collaborative, "he said. "I would say that the average Facebook user may not even realize what's going on, so he's not even informed about it."
In 2014, Facebook researchers conducted a study to determine whether showing users negative or positive content led them to produce negative or positive posts of their own. The study sparked outrage among users, who said they had not even known they were part of it.
The Facebook researcher who designed that experiment, Adam D.I. Kramer, said in a post that the research was part of an effort to improve the service, not to upset users. Since then, Facebook has made other efforts to improve its service.
"Suicide prevention experts believe that one of the best ways to prevent suicide is for people in distress to hear from friends and family. Facebook is in a unique position to help thanks to the friendships that people have on our platform – we We can connect people in distress with friends and organizations that can offer their support, "said Monday in an email Monday Antigone Davis, security officer for Facebook, in response to questions asked about the new opinion document.
"The experts also agree that it's very important to get people to work as quickly as possible, which is why we're using technology to proactively detect what's going on." We are committed to being more transparent in our efforts to prevent suicide, "she says.
Facebook also noted that using technology to proactively detect content in which someone may be expressing thoughts of suicide does not amount to collecting health data. The technology does not measure an individual's overall suicide risk or anything about a person's mental health, the company said.
What health experts want from technology companies
"This is another area in which private business companies are launching programs to produce good results, but we do not know to what extent they are trustworthy or how they can maintain or are willing to keep the information collected, whether it's Facebook or someone else, "says Caplan, who was not involved in the newspaper.
"This brings us to the general question: do we keep enough regulatory eye on mainstream social media? Even when they are trying to do something good, that does not mean that they are succeeding properly ", did he declare.
"All of these private entities that are not generally considered to be health care entities or institutions are potentially able to have extensive health care information, particularly using machine learning techniques," did he declare. "At the same time, they are almost completely outside the current regulatory system that exists to deal with this type of institution."
"The information they collect – and especially when they are able to use machine learning to make predictions about health care and gain insight into their health care – this information is all protected in the clinical field by factors such as HIPAA for all who receive their health care through what is called a covered entity, "said Magnus.
"But Facebook is not a covered entity and Amazon is not a covered entity – Google is not a covered entity," he said. "Therefore, they do not necessarily have to comply with the confidentiality requirements in place for the way we process health care information."
Often, the only privacy protections social media users have are the terms laid out in the company's policy documents, which you sign or click "accept" on when setting up your account, Magnus said.
"There's something really weird about putting in place, essentially, a public health screening program through these companies that are both outside the regulatory structures we've talked about and because they are outside of that, their research and the algorithms themselves are completely opaque, "he said.
"The problem is that all this is so secret"
"In theory, I would like if we could take advantage of the kind of data that all these systems collect and use them to better support our patients, that would be great, I do not want it to be a closed book. would like this to be open to outside regulators (…) .I would very much like there to be some form of informed consent, "said Schlozman.
"The problem is that all this is so secret on Facebook's side, and Facebook is a multi-million dollar, for-profit business, the possibility that this data is collected and used for other purposes than apparent charity because it's hard to ignore that, "he said. "We really feel that they are transgressing a lot of pre-established ethical boundaries."