How Should Social Media Handle Reported Suicidal Posts?

And is reporting a post helpful?

Ashley L. Peterson · Published in Invisible Illness · Feb 20, 2020 · 4 min read


Image by Gerd Altmann from Pixabay

Let’s say you’re scrolling through your Twitter feed. You see a tweet suggesting that the person who wrote it intends to act imminently on suicidal thoughts. What do you do?

Unfortunately, there isn’t really a good answer.

I’ve been on the reporting side once. Someone had posted on their blog, which was shared on Twitter, that they had overdosed on pills. I saw the blog post first and thought, crap, what can I do? So I headed over to Twitter to see if I could find more information.

Luckily this person’s town of residence was on their Twitter profile, so I didn’t have to try reporting to Twitter. Instead, I called the cops in this person’s town, at which point I found out another Twitter user had already done the same thing and police were already with the blogger.

That’s probably the ideal way for it to go down in terms of effective reporting, but most cases aren’t that simple.

Twitter

I fairly regularly see tweets from people complaining that someone has reported their tweets to Twitter because of suicide-related content. They’re annoyed, and the people commenting share their frustration. Twitter’s response is to send an email with a list of services the person could access and, in some cases, to lock the suicidal person’s account for a period of time.

It seems pretty clear from what I’ve seen that reporting to Twitter doesn’t help at all, and more often than not makes things worse.

Twitter has this to say about how they approach suicide threats:

“After we assess a report of self-harm or suicide, Twitter will contact the reported user and let him or her know that someone who cares about them identified that they might be at risk. We will provide the reported user with available online and hotline resources and encourage them to seek help.”

They have a reporting form for threats of suicide or self-harm. When you’re reporting, you’re required to provide your name and email address, but not your Twitter handle.

Twitter also has a list of mental health partners around the world, which is probably what they provide to people whose Tweets have been reported. My own country, Canada, is partner-free — so would Twitter just tell Canadians they’re shit outta luck?

Then you’ve got their policy on glorifying self-harm and suicide. Here’s some of what they’ve got to say:

“While we want people to feel safe sharing their thoughts and feelings, we draw the line at encouraging or promoting self-harm and suicidal behavior, which can pose safety risks for others. With that in mind, we apply a two-pronged approach to the issue: supporting people who are undergoing experiences with self-harm or suicidal thoughts, but prohibiting the promotion or encouragement of self-harming behaviors.”

The first time someone is deemed to violate this policy, they will need to remove the tweet, and their account will be temporarily locked. Repeat violators may have their accounts suspended. It sounds like the bar for something to be deemed glorifying suicide is set very low, and that Twitter leans more heavily on the prohibition side than the support side.

Facebook & Instagram

I’m less familiar with how social media platforms other than Twitter respond, but Facebook has a form to report suicidal content and a page with recommendations on how to help someone who’s suicidal.

On a page about self-injury, Instagram says:

“You may have seen a post on Instagram that worries you. If so, you can let us know about it by reporting the post and we may send some resources that we’ve developed with suicide prevention experts to the person. They won’t know that you reported their post. In some cases, we may contact emergency services if they seem to be in immediate danger.”

On the resources page that’s hyperlinked in the above quote, they suggest talking to a friend, talking to a helpline volunteer, or doing some self-care, like drinking a glass of water. When I clicked on the helpline volunteer link, it sent me over to Facebook, where I was given contact info for Canadian helplines for people under 20 and also for First Nations and Inuit people. Nada for middle-aged white folk like me.

Is there a better way?

So, they are making some effort, but is it just a matter of tokenism? Perhaps.

If a platform’s actions are actually making things worse for people who are experiencing thoughts of suicide, that’s not good.

You’d hope there would be some effective reporting mechanism available when someone truly is at imminent risk. But then again, the reliability of potential reporters isn’t always going to be that great.

Emailing someone a list of resources is something I would consider tokenism. If people want to know about suicide crisis resources, they’re quite capable of using Google to find them. A form letter from Twitter is probably just going to feel insulting. Locking people’s accounts is even more problematic, in that it silences people at a time when it’s important for them to be talking. There are a lot of people who rely on their social media networks as an important form of mental health support, and cutting them off from that is not helpful.

Is there a better way? I suppose it depends on the volume of reports platforms receive. But what if they had a team of people with proper mental health training who could call emergency services when that was needed and sufficient information was available, or reach out by direct message to see whether the person wanted help connecting to resources?

Until there is a better way, if you see someone posting on social media about feeling suicidal, think twice before reporting them to the social media platform.

Originally published at https://mentalhealthathome.org on February 20, 2020.

Ashley L. Peterson
Author of 4 books — latest is A Brief History of Stigma | Mental health blogger | Former MH nurse | Living with depression | mentalhealthathome.org