Censorship: The Good, The Bad and The Online

We live in a world where more is more: the more digital technologies and media platforms on offer, the better. While this proliferation has driven improvements in almost every field imaginable, it has arguably come at a cost.

In the current digital age, you would expect free speech to flourish. After all, a rapid increase in information flows should bring with it greater freedom of expression, creativity and diversity of opinion. Instead, it is becoming clear that the online domain is not free from the censorship experienced in decades, and centuries, past (Balkin 2009, p. 427).

[Image: The first recorded censorship: Socrates’ speech on democracy is censored by the Athenian government]

Most of us are well versed in how film classification works: a designated board must view and rate content before it can legally be made available in Australia. This form of censorship is so ingrained that many of us take it for granted as simply part of life. In contrast, online censorship has only recently become a visible, and hotly debated, issue across both corporate and public spheres.

While internet censorship was once discussed largely in the Chinese context, it is rapidly becoming a global concern. Essentially, this censorship involves control not only over internet access but also over the functionality and content of websites (Eriksson & Giacomello 2009, p. 206). The main concern cited by researchers is the potential for “mission creep”: the tendency of governing bodies to enlarge the scope of formal censorship after it is first introduced (Villeneuve 2006, p. 1). This applies not only to national governments but also to multinational corporations and media organisations.

[Cartoon: Economic censorship (McMillan 2011)]

Is censorship ever necessary? If it is, what threat does it pose to free speech?

For an example of non-governmental censorship in action, we can look to video-sharing platform YouTube’s recently updated community guidelines. The Google-owned company’s previous policy was typical: no nudity, no portrayals of extreme violence, no hateful content and no videos that infringe copyright (Community Guidelines 2017).

Following a number of tragic terrorist attacks this year, members of the public began calling for the guidelines to be revised, claiming that extremist content available on YouTube was being used for radicalisation. The internet has long been a critical tool for terrorists and extremists to recruit, communicate and share information. Internet giants such as YouTube have worked for years to contain extreme content on their sites, though many have criticised them for not doing enough.

While many of these videos promote offensive viewpoints, they often do not meet YouTube’s threshold for removal. In June 2017, the company announced a censorship policy aimed at curbing the flow of extremist content on the platform.

Under the change, Google said offensive videos that did not meet its standard for removal would be placed behind a warning and could no longer be recommended, endorsed or commented on by users. Such videos were already barred from carrying advertising, but were not previously restricted in any other way. The new measures make these videos almost impossible to find, a move which Google’s senior vice president Kent Walker claims “strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”

It is interesting to note the relationship between the seemingly ‘detached’ online media space of YouTube and real-life actions and audiences. The severity of YouTube’s stance suggests a belief that viewers do not consume content as mindlessly as once thought.

What do you think?

Is the increased censorship of YouTube a positive step towards counter-radicalisation? Is it necessary?


References:

Balkin, JM 2009, ‘The future of free expression in a digital age’, Pepperdine Law Review, vol. 36, p. 427.

Eriksson, J & Giacomello, G 2009, ‘Who controls what, and under what conditions?’, International Studies Review, vol. 11, no. 1, pp. 206–210.

Villeneuve, N 2006, ‘The filtering matrix: Integrated mechanisms of information control and the demarcation of borders in cyberspace’, First Monday, vol. 11, no. 1–2.

Community Guidelines 2017, YouTube, viewed 25 September 2017, <https://www.youtube.com/yt/policyandsafety/communityguidelines.html>.
