On 31 December 2017, YouTube star Logan Paul published a controversial vlog showing a corpse he discovered whilst visiting Japan's Aokigahara Forest at the base of Mount Fuji, known as the 'suicide forest' because of the number of suicides that take place there. The video, which showed Paul laughing and joking after finding the man's body, was viewed nearly 5 million times.

It took ten days before YouTube finally admitted the video should never have been posted. During that period, the footage ranked as a trending video on YouTube's front page, sparked a Change.org petition calling for its removal that gathered 500,000 signatures, and drew international condemnation.

Paul has since apologised for the video.

However, such incidents of inappropriate content being posted online are not confined to YouTube. In April 2017, a Thai man hanged his baby daughter on the live-streaming feature 'Facebook Live' before taking his own life. The video remained online for 24 hours before Facebook finally removed it.

Amid such furore, these cases raise the question: are social media platforms doing enough to self-regulate the inappropriate content posted in their communities?

Is self-policing possible?

Self-policing social media sites can be notoriously difficult because of the sheer amount of content uploaded every minute.

Facebook adds half a million new profiles every day; that's 6 new profiles per second.

Sites like YouTube have over 400 hours of video uploaded every minute. The 10,000 human moderators that YouTube employs to stop inappropriate content like Paul's from going online are only a drop in the ocean.

As well as human moderators, social media sites use artificial intelligence software and algorithms to identify videos that violate community guidelines.

These algorithms are a necessity, doing the equivalent work of 180,000 people working 40 hours a week to sort through potentially controversial content.

However, the Logan Paul case makes it apparent that algorithms are not watertight enough to replace human judgement, something YouTube has finally admitted.

Fake news

What's more, 2017 was dubbed the 'year of fake news'.

Social media giants Facebook and Twitter, which have 2 billion and 328 million monthly users respectively, have been hit hard by trolls and bot accounts spreading fake news. As a result, both companies have been prompted to hire more human moderators to address the problems plaguing their sites.

In an attempt to combat 'fake news', Facebook's founder and chief executive, Mark Zuckerberg, recently announced that the platform would use surveys to boost 'trustworthy' news.

At best, reactive and at worst, a losing battle

The reactive approach to self-policing is clear. After the Logan Paul controversy, YouTube announced plans to launch an 'Intelligence Desk' to detect controversial content before it goes viral.

Similarly, Facebook announced a review of its reporting procedures after a spate of murders was broadcast on Facebook Live.

Although community guidelines and policies exist for all the main social media sites, they are clearly not sufficient. Traditional media industries are subject to stringent regulations on what they can and cannot publish. Social media platforms, by contrast, have inadequate policies and are continuously playing catch-up, trying to self-regulate against the never-ending tide of content posted online every minute.

At best, their approach can be described as reactive; at worst, as a losing battle.

YouTube has since commented that its content should not be regulated in the same way as that of traditional media broadcasters. Rather than a content creator, YouTube sees itself as a "platform that distributes content" and argues that it should therefore not come under the same strict regulations as traditional broadcasters.