Social media bosses could be liable for harmful content

What kind of constraints would be legally enforceable and workable? In your view.
That's the kind of question that would keep a lawyer employed for a good few years... none of us has a cat in hell's chance of giving a meaningful answer.

Might help if you could define "harmful" first, though.
 
What kind of constraints would be legally enforceable and workable? In your view.

I believe content that is illegal should be removed and the operators of social media should be obligated to do this.

I do not believe that 'harmful' content should be covered under these regulations.

As an example, someone may not approve of homosexuality. They may state that they find it disgusting or wrong or abhorrent. I do not agree with them. I do think they should be freely allowed to hold and state those opinions however.
If they then state that homosexuals should be harmed, that has crossed a line into illegality. That's the point at which that content should be removed.

As for enforcement... community support helps. If something has been flagged, then perhaps x number of days should be allowed to review it.

Edit: apologies if you miss this. As a further example of why I don't think 'harmful' works, due to individual subjectivity.
Recently on here we had a poster say that a child should be tortured and killed. That post was seen as being acceptable. We had another poster use a term which the dictionary defines as not being offensive and yet the posts referencing it were deleted.
I think that's completely back to front.
 
I believe content that is illegal should be removed and the operators of social media should be obligated to do this.
How would you achieve the removal of illegal content?

Just to give a trivial example, I don't believe an algorithm could identify child porn, i.e. differentiate between a 15-year-old and an 18-year-old.

So are we talking a volunteer mod army à la Wikipedia?

Or automatically removing content flagged as illegal by other users?

Or hiring thousands of staff to surf for the bad stuff?

How do you determine if the effort the company is making is "good enough" - that they are trying hard enough and should not be punished, when (inevitably) they aren't able to remove 100% of all illegal content?
 
Imo they should be treated like publishers. This free pass they were given gives them zero incentive to police the content on their sites.
In which case the amount of content on social media sites will literally drop off a cliff.

Publishers employ people to proofread, edit, re-draft... all their publications.
 
I believe content that is illegal should be removed and the operators of social media should be obligated to do this.

I do not believe that 'harmful' content should be covered under these regulations.

As an example, someone may not approve of homosexuality. They may state that they find it disgusting or wrong or abhorrent. I do not agree with them. I do think they should be freely allowed to hold and state those opinions however.
If they then state that homosexuals should be harmed, that has crossed a line into illegality. That's the point at which that content should be removed.

As for enforcement... community support helps. If something has been flagged, then perhaps x number of days should be allowed to review it.

Edit: apologies if you miss this. As a further example of why I don't think 'harmful' works, due to individual subjectivity.
Recently on here we had a poster say that a child should be tortured and killed. That post was seen as being acceptable. We had another poster use a term which the dictionary defines as not being offensive and yet the posts referencing it were deleted.
I think that's completely back to front.
Well, at least that is clear :)

I don't like your examples by way of exemption; I think they're subjective and quite telling. But I get it, again.

Dis said:
As for enforcement... community support helps.

What do you mean by this?
 
In which case the amount of content on social media sites will literally drop off a cliff.

Publishers employ people to proofread, edit, re-draft... all their publications.

They will find a way. At the moment they have zero responsibility for content. When/if that changes they either find a way or perish. I don’t buy that FB can’t afford to police itself. I also believe that none of these companies should be able to use and sell our metadata either. Yeah, their profits will take a hit, but currently they can do as they please with our data and it isn’t right. Tbh if FB disappeared I wouldn’t shed a tear.
 
But who defines what 'harmful content' is? Illegal content, fair enough. But something doesn't become 'harmful' simply because a few idiots on the internet take umbrage with it. Let's not forget, in this country we have the police chasing people down purely because they retweeted something that a few cretins didn't like. It wasn't illegal, it wasn't a hate crime, it wasn't even them that originally posted it, yet the police thought it necessary to track them down because they wanted to 'check his thinking'.

The definition of "harmful content" is made by whoever has the power to do so and the definition is whatever ideas they disagree with and/or think they can gain power by suppressing. The same thing that controlling speech is always for and, as you point out, we're already well on the road to thoughtcrime legislation.
 
Well, at least that is clear :)

I don't like your examples by way of exemption; I think they're subjective and quite telling. But I get it, again.



What do you mean by this?

What don't you like/feel is telling about the examples?

By community support I mean the users of the site actively flagging illegal content. There's no way you could employ sufficient people; however, an active community will do that job for you.
 
How many thousands of staff to review every post that gets flagged on Facebook?

Facebook already review flagged content and remove harmful and illegal posts, so for the likes of Facebook there would likely only be a moderate increase in personnel.
It would probably also help to have a system where, if content a user flags is subsequently removed, that user gains an increased weighting so that content they flag in future is reviewed sooner.
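A rough sketch of how that flag weighting might work, purely as an illustration (the names, starting weight and decay factors below are my own assumptions, not anything Facebook actually does):

```python
# Illustrative sketch only: users whose flags lead to removals gain weight,
# so content they flag in future rises up the review queue faster.

class Flagger:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.weight = 1.0  # assumed starting weight for every user

    def record_outcome(self, was_removed: bool) -> None:
        # Accurate flags increase future influence; inaccurate ones decay it.
        if was_removed:
            self.weight = min(self.weight * 1.2, 10.0)
        else:
            self.weight = max(self.weight * 0.9, 0.1)


class ReviewQueue:
    def __init__(self):
        self.scores = {}  # post_id -> accumulated weight of everyone who flagged it

    def flag(self, post_id: str, flagger: Flagger) -> None:
        self.scores[post_id] = self.scores.get(post_id, 0.0) + flagger.weight

    def next_for_review(self) -> str | None:
        # Reviewers see the post with the highest accumulated weight first.
        if not self.scores:
            return None
        post_id = max(self.scores, key=self.scores.get)
        del self.scores[post_id]
        return post_id
```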

The websites that will have the bigger issues will be the smaller sites, but I expect the regulations will not be absolute. Illegal content could remain for a short period before removal, with companies given various stages of warnings.

Court cases will be reserved for the blatant offenders who purposely try to spread hate and illegal content. The likes of Gab would likely get shut down quickly, as it is a cesspit for the lowest dregs of society to freely spew illegal garbage and incite racial or homophobic hatred.
 
Facebook already review flagged content and remove harmful and illegal posts, so for the likes of Facebook there would likely only be a moderate increase in personnel.
It would probably also help to have a system where, if content a user flags is subsequently removed, that user gains an increased weighting so that content they flag in future is reviewed sooner.
However, I could find no data on the number of people banned following New Zealand for re-distributing content (I noted this morning the BBC is using its name).
They had an opportunity to disclose more about the efficiency of their review/banning system, but they have not taken it; maybe they are currently being considerate to the situation.

A tiered membership might work, where (passport/NI) authenticated users (who can be identified and banned by FB/Google) are trusted more, while all posts from unauthenticated users are delayed/reviewed;
or even just making it subscription (purchased token) based, to finance the content review.
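Just to make the tiering idea concrete, a hypothetical sketch (the 24-hour delay and the field names are my own assumptions):

```python
from datetime import datetime, timedelta

REVIEW_DELAY = timedelta(hours=24)  # assumed hold period for unverified accounts

def handle_post(author_verified: bool, text: str) -> dict:
    """Hypothetical tiered posting: verified users publish instantly,
    unverified users' posts are held back for review first."""
    now = datetime.utcnow()
    if author_verified:
        # Passport/NI-verified account: trusted, goes live straight away.
        return {"text": text, "status": "published", "visible_from": now}
    # Unverified account: held until reviewed or until the delay expires.
    return {"text": text, "status": "pending_review", "visible_from": now + REVIEW_DELAY}
```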
 
Owners of physical spaces that are open to the public have a degree of responsibility over what happens in those spaces. I don't see why virtual spaces are any different in that regard.

"We're just a social conduit" wouldn't work for a pub landlord if punters were openly supporting terrorism, for example.

No they don't. If you walk into, say, a cinema and punch someone in the face and/or scream obscenities at them, the cinema is not responsible.

The cinema is responsible only for taking reasonable steps to ensure that the environment itself is safe (i.e. by fixing or blocking off a damaged seat with sharp edges).

This is all typical of the stupid fools that run government anyway.

They can't stop hardcore child pornography being trivially available on the Internet, so they have no chance of significantly reducing the spread of 'hate speech' and/or 'fake news' short of implementing the most draconian of measures (which would of course inevitably be misused).

I think there's a wider problem of an out-of-touch political class who just haven't realised that the reason a significant tranche of the proles don't agree with them (or are in various stages of outright revolt - see the 'yellow vests') is due to their own failures.

Ironically, we have a class of people who are quick to scream racism at others when outsiders are blamed by some proles for their misfortune, yet who have become obsessed with blaming political dissent or unfavourable political outcomes on 'outside interference'.

Foreign countries have always meddled in other nations' affairs and politics to a degree, but we are now subject to a narrative that it was some conspiracy of the Russians and other external agents that was principally responsible for the Brexit vote and the 2016 US election result.

The narrative ignores the long-standing discontent with issues among the domestic population that may have led to those votes, preferring to make risible claims that don't stack up well when the evidence is considered.
 
What kind of constraints would be legally enforceable and workable? In your view.

Your headline said 'harmful content'.

The standard of constraint that is enforceable and workable is that which is 'illegal'.


I.e. things that a democratic parliament has had time to consider, vote on and agree should be illegal, not some vague, nebulous and changing idea of 'harmful'.
 