
Facebook introduces dystopian concept: a trustworthiness-based rating system

 
The thin line between what could be considered “beneficial to society” and what is “dystopian” has gotten even thinner. Has Mark Zuckerberg thought this one through? To combat fake news, Facebook has now employed 3,000 additional censors who, together with the 4,500 already employed, will decide what to delete and what to keep on the social network. This was communicated in a blog post and has definitely raised some eyebrows amid the growing suspicion that Facebook has attempted to crack down on users who do not fall in line with the accepted political narrative. A number of voices online, PragerU among them, have reported suspicious activity and feel the breath of Facebook down their necks. Where is the cybersociety headed?

According to Facebook itself, the system has been in development throughout the year. Ever since US President Donald Trump popularized the term “fake news”, it has appeared on the radar of multiple giant social platforms, including Twitter, YouTube and Facebook. I remember the reports about the dystopian nature of the Chinese Social Credit Score, which is to be fully integrated into Chinese society by 2020. A number of Western media reports were quick to label it as something from an episode of the TV show Black Mirror – but what they fail to understand is that in modern times, large corporate entities and governments intertwine, so for Facebook to introduce a similar system based on trustworthiness is equally worrying.

The rating system is based on the ability of Facebook users to flag your content as false news. If a significant number of users flag your posts as such, your “trustworthiness” rating goes down on a scale from 0 to 1, with 0 being the least trustworthy; the lower the rating, the lower the visibility of your posts in people’s feeds, as filtered by Facebook itself. I should be careful with this information here, as perhaps this very article could be flagged as fake news if enough people considered it as such. Could mass-flagging be abused for malign purposes, or turned against users on the wrong end of the political spectrum? It could, and Facebook needs to be careful not to fall into this trap, for instance by implementing some sort of system where a rating does not drop automatically due to third-party decisions alone.
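Facebook has not published how the score is actually computed, so here is a minimal, purely hypothetical sketch in Python of what a flag-driven trust score of this kind could look like. The class name, the flag threshold, the penalty weights and the visibility formula below are all my own assumptions for illustration, not Facebook’s actual method.

    # Hypothetical sketch of a flag-driven trust score on a 0-1 scale.
    # None of these names, weights, or thresholds come from Facebook;
    # they only illustrate the kind of mechanism described above.

    class TrustScore:
        def __init__(self, score: float = 1.0):
            self.score = score  # 1.0 = fully trusted, 0.0 = least trustworthy

        def register_flags(self, flags: int, reviewers_confirmed: bool):
            """Lower the score only when human review confirms the flags,
            so mass-flagging alone cannot tank a user's rating."""
            if flags < 10:  # ignore small numbers of flags
                return
            if reviewers_confirmed:  # third-party flags alone are not enough
                penalty = min(0.1 * (flags / 100), 0.3)  # capped per incident
                self.score = max(0.0, self.score - penalty)

        def feed_visibility(self) -> float:
            """Map trust to a visibility multiplier for ranking posts."""
            return 0.25 + 0.75 * self.score  # never fully hide, just demote


    user = TrustScore()
    user.register_flags(flags=250, reviewers_confirmed=True)
    print(round(user.score, 2), round(user.feed_visibility(), 2))  # 0.75 0.81

The reviewers_confirmed gate is exactly the safeguard suggested above: flags alone would only queue a post for review, and the score drops only once a human reviewer agrees, so mass-flagging by itself cannot sink a user’s rating.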

But that’s not the only recent news concerning Facebook…

 

Facebook is also taking off the gloves to clear out what it calls “hatred”. Additional thousands of censors will be employed with the task of deleting offending posts. This was announced by the company’s vice president of public policy for Europe, the Middle East and Africa, Richard Allan.

“Our current definition of hate speech is anything that directly attacks people based on what are known as their protected characteristics,” writes Allan. Race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender identity, and serious disability or illness are mentioned as examples.

Allan acknowledges that the boundaries can sometimes be difficult to draw. “There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of what counts as criminal vary considerably.” The combination of a trustworthiness system and rules built on such a thinly drawn definition of hate speech could make your experience on Facebook a bit invasive, and only if Facebook comes to terms with the potential for misuse will it be able to take care of it.

The California-based company relies mostly on its two billion users to report posts that they consider hateful. Facebook’s censors then review the material and decide whether it should be deleted. The human censors have done a great job of this thus far.

Facebook is now seeking another 3,000 people for its “community operations team” to assist in reviewing posts that may violate company policies. According to Richard Allan, Facebook currently has 4,500 employees tasked with reviewing reported posts.

In the blog post, Allan states that Facebook is aware that deleting posts that others find offensive may come across as censorship, and that the company is therefore working to improve its filtering processes and intends to remain transparent about its standards. He adds that Facebook is currently “experimenting” with techniques that could eventually help filter offensive language automatically.

Earlier this month, Facebook similarly reported on its fight against online terrorism through a combination of artificial intelligence and trained experts. According to AP, the company says it has more than 150 employees dedicated to counter-terrorism alone.

 
It’ll be interesting to see if Facebook can tackle this problem without making the line between dystopian measures and benefit to society even thinner. At the same time, people await a new breakthrough social network to appear through the cracks. Kim Dotcom, are you working on something? 
 
Evolvera – always changing, always evolving. 
