But the question remains: Should Facebook change the way it monitors users for the risk of suicide?

"People need to know that … they can be experienced"

In 2011, Facebook partnered with the National Suicide Prevention Lifeline to launch suicide prevention efforts, including enabling users to report suicidal content they saw posted by a friend on Facebook. The person who posted the content would receive an email from Facebook encouraging them to call the National Suicide Prevention Lifeline or to chat with a crisis worker.
In 2017, Facebook expanded those suicide prevention efforts to include artificial intelligence that can identify posts, videos and Facebook Live streams containing suicidal thoughts or content. That year, the National Suicide Prevention Lifeline said it was proud to partner with Facebook and that the social media company's innovations were making it easier for people to seek and get support.
"It's important that community members, whether online or offline, don't feel helpless when dangerous behavior is occurring," John Draper, director of the National Suicide Prevention Lifeline, said in a press release in 2017. "Facebook's approach is unique. Their tools enable their community members to actively care, provide support and report concerns when needed."

When the artificial intelligence tools flag a possible risk of self-harm, those posts go through the same human review as posts reported directly by Facebook users.

The move to use AI was part of an effort to further support at-risk users. The company had faced criticism over its Facebook Live feature, which some users had used to stream graphic events, including suicide.
In a blog post, Facebook detailed how its AI looks for patterns in posts or in comments that may contain references to suicide or self-harm. According to Facebook, comments such as "Are you OK?" and "Can I help?" can be an indicator of suicidal thoughts.
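
Facebook has not published the details of its detection system, so the short Python sketch below is only a hypothetical illustration of the pattern-matching idea described above: a post is routed to human review when its text, or the comments left by friends, match simple concern phrases. The phrase lists and function names are invented for this example; a production system would rely on trained classifiers rather than hand-written keywords.

import re

# Hypothetical phrase lists for illustration only.
POST_PATTERNS = [r"\bwant to die\b", r"\bend it all\b", r"\bkill myself\b"]
COMMENT_PATTERNS = [r"\bare you ok\b", r"\bcan i help\b", r"\bplease talk to me\b"]

def flag_for_review(post_text, comments):
    """Return True if the post should be routed to human review."""
    text = post_text.lower()
    if any(re.search(p, text) for p in POST_PATTERNS):
        return True
    # Comments from friends can also signal concern.
    joined = " ".join(c.lower() for c in comments)
    return any(re.search(p, joined) for p in COMMENT_PATTERNS)

if __name__ == "__main__":
    # A worried comment from a friend triggers the flag even if the post itself does not.
    print(flag_for_review("feeling really down lately", ["Are you OK?", "call me"]))  # True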

If the AI or another Facebook user flags a post, the company reviews it. If the post is determined to need immediate action, Facebook may work with first responders, such as police departments, to send help.

Yet an opinion piece published Monday in the journal Annals of Internal Medicine argues that Facebook lacks transparency and ethics in its efforts to scan users' posts, identify those who appear to be at risk for suicide and alert emergency services to that risk.

The paper argues that Facebook's suicide prevention efforts should be held to the same standards and ethics as clinical research, including requiring review by outside experts and informed consent from the people whose data are collected.

Dr. John Torous, director of the digital psychiatry division in the Department of Psychiatry at Beth Israel Deaconess Medical Center in Boston, and Ian Barnett, assistant professor of biostatistics at the University of Pennsylvania's Perelman School of Medicine, co-authored the new paper.

"There is a need for discussion and transparency about innovation in the field of mental health in general.I think technology has great potential to improve suicide prevention, to help mental health in general, but people must be aware that these elements are essential. " happens and, in some ways, they can be experienced, "said Torous.

"We all agree that we want to innovate in suicide prevention, we want new ways to reach people and help them, but we want it to be done in an ethical way, transparent and collaborative, "he said. "I would say that the average Facebook user may not even realize what's going on, so he's not even informed about it."

In 2014, Facebook researchers conducted a study of whether negative or positive content shown to users led those users to produce negative or positive posts. The study sparked outrage among users who said they hadn't known that it was being conducted.

The Facebook researcher who designed the experiment, Adam D.I. Kramer, said in a post that the research was part of an effort to improve the service, not to upset users. Since then, Facebook has made other efforts to improve its service.

Last week, the company announced that it has been working with experts to protect users from content related to self-harm and suicide. The announcement came after news broke about the suicide death of a girl in the UK whose Instagram account reportedly contained distressing content about suicide. Facebook owns Instagram.

"Suicide prevention experts believe that one of the best ways to prevent suicide is for people in distress to hear from friends and family. Facebook is in a unique position to help thanks to the friendships that people have on our platform – we We can connect people in distress with friends and organizations that can offer their support, "said Monday in an email Monday Antigone Davis, security officer for Facebook, in response to questions asked about the new opinion document.

"The experts also agree that it's very important to get people to work as quickly as possible, which is why we're using technology to proactively detect what's going on." We are committed to being more transparent in our efforts to prevent suicide, "she says.

Facebook also noted that using technology to proactively detect content in which someone might be expressing thoughts of suicide does not amount to collecting health data. The technology does not measure a person's overall suicide risk or anything about a person's mental health, the company said.

What health experts want from technology companies

Arthur Caplan, professor and founding head of the division of bioethics at NYU Langone Health in New York, praised Facebook for wanting to help with suicide prevention but said the new opinion paper was correct: Facebook needs to take additional steps to improve privacy and ethics.

"This is another area in which private business companies are launching programs to produce good results, but we do not know to what extent they are trustworthy or how they can maintain or are willing to keep the information collected, whether it's Facebook or someone else, "says Caplan, who was not involved in the newspaper.

"This brings us to the general question: do we keep enough regulatory eye on mainstream social media? Even when they are trying to do something good, that does not mean that they are succeeding properly ", did he declare.

Many technology companies, including Amazon and Google, probably have access to large amounts of health data or are likely to in the future, said David Magnus, a professor of medicine and biomedical ethics at Stanford University who was not involved in the new opinion paper.

"All of these private entities that are not generally considered to be health care entities or institutions are potentially able to have extensive health care information, particularly using machine learning techniques," did he declare. "At the same time, they are almost completely outside the current regulatory system that exists to deal with this type of institution."

For example, Magnus noted that most technology companies fall outside the "Common Rule," or the Federal Policy for the Protection of Human Subjects, which governs research on humans.

"The information they collect – and especially when they are able to use machine learning to make predictions about health care and gain insight into their health care – this information is all protected in the clinical field by factors such as HIPAA for all who receive their health care through what is called a covered entity, "said Magnus.

"But Facebook is not a covered entity and Amazon is not a covered entity – Google is not a covered entity," he said. "Therefore, they do not necessarily have to comply with the confidentiality requirements in place for the way we process health care information."

HIPAA, the Health Insurance Portability and Accountability Act, requires the security and confidential handling of a person's protected health information and addresses when that information may be disclosed.

Often, the only privacy protections social media users have are the agreements laid out in a company's policy documents, which you sign or click to accept when setting up your account, Magnus said.

"There's something really weird about putting in place, essentially, a public health screening program through these companies that are both outside the regulatory structures we've talked about and because they are outside of that, their research and the algorithms themselves are completely opaque, "he said.

"The problem is that all this is so secret"

Dr. Steven Schlozman, co-director of the Clay Center for Healthy Young Minds at Massachusetts General Hospital, who was not involved in the new opinion paper, noted that Facebook's suicide prevention efforts are not subject to the same ethical standards as medical research.

"In theory, I would like if we could take advantage of the kind of data that all these systems collect and use them to better support our patients, that would be great, I do not want it to be a closed book. would like this to be open to outside regulators (…) .I would very much like there to be some form of informed consent, "said Schlozman.

"The problem is that all this is so secret on Facebook's side, and Facebook is a multi-million dollar, for-profit business, the possibility that this data is collected and used for other purposes than apparent charity because it's hard to ignore that, "he said. "We really feel that they are transgressing a lot of pre-established ethical boundaries."