The challenge of killing the Blue Whale meme

Clicking on the second option brings up another menu that includes a more obvious choice for describing self-harm or suicide. Click on that, and a pop-up screen offers four options for helping the person who wrote the post. These are powerful resources, including scripts for talking to friends who are struggling and an option to ask Facebook to intervene – but they're buried deep.

In a March 1 announcement, Facebook also said it had begun limited US trials of artificial intelligence-based pattern recognition. The system is designed to identify posts in which the poster has expressed suicidal thoughts, make reporting options more prominent, and refer the posts to Facebook's Community Operations team for review. This could make it easier to report problem posts – but at this point, it's difficult for outsiders to verify.

Social media has enabled strange kinds of challenges to spread across the internet. The people best positioned to see a meme's impact and how it spreads are those at the social media companies themselves. These services collect and analyze streams of data about what their users – including minors – discuss. They are sophisticated enough at parsing that data to sell it to advertisers. We should ask them to direct that same capability toward public health issues, especially those involving adolescents and self-harm.

To be fair, figuring out how to deal with this type of content is difficult. First, these companies see themselves as platforms, not media companies, and largely take a hands-off approach to controlling content in the name of free speech. Over the past year, Facebook in particular has acknowledged that it bears some responsibility for stopping fake news and keeping illegal and dangerous content off its site. But in general, intervening in searches and flagging posts requires the kind of editorial judgment that tech companies are often reluctant to exercise and struggle to apply consistently. Furthermore, it takes time and attention at engineer-run companies, which are more accustomed to solving problems with technology than with human review.

Then there is the fact that chasing the latest meme becomes a game of whack-a-mole. The Blue Whale Challenge is particularly awful, but there is a constant flurry of memes that encourage people – especially, but not only, minors – to harm themselves or others for spectacle. The #saltandice challenge dares participants to endure, for as long as possible, the burn that salt combined with ice produces on the skin. The #knockout challenge involves punching an unsuspecting victim hard enough to knock them unconscious.

We should not expect platforms to keep us safe from every harmful meme. But we should absolutely ask them to do more – consistently – to ensure that when new threats are identified, they act in good faith and tailor their tools to address them. Public service notices that direct searchers to accurate information and mental health resources should be an industry-wide expectation.

In the case of the Blue Whale Challenge, even as American media coverage has grown, Tumblr, at least, has found that its data tell a different, more promising story. Searches for the term on its platform peaked in May at around 60,000. The following month, they fell 68 percent. Maybe the meme is fading into obscurity, where it belongs.

But another meme is sure to follow. And when it does, social media companies should be among the first to notice. For the public's sake, they have to step up.
