Will social media face greater regulation in the UK?

10/04/2018

Charles Ansdell

The recent Cambridge Analytica/Facebook scandal, coupled with concerns about political interference, terrorism and crime, is pushing social media into the regulatory crosshairs, argues Charles Ansdell.

There is no doubt that social media has transformed the world.

The largest platforms – Facebook, YouTube and Instagram – have more than 2 billion, 1.5 billion and 800 million active users worldwide, respectively.

Nearly 30% of the global population – and 68% of those who have internet access – use Facebook. Its audience is greater than the individual populations of China, India, Europe and North America.

Its capability to transcend borders and physical barriers marks this technology as remarkably powerful. It has spread new ideologies, catalysed political movements, spread fake news and connected billions of people.

Moreover, it has created asymmetrical communications, allowing the less powerful and influential to communicate widely. In doing this, it has reduced the authority of conventional voices and media.

This in turn has led many countries and politicians to fear and abhor social media. In countries such as China, which has built the “Great Firewall” to prevent its citizens from accessing social media, it is seen as subversive and dangerous.

Many other autocratic states view social media as a virtual manifestation of political opposition and action groups that they physically seek to suppress. Indeed, many have placed limitations on their citizens accessing social media (though in reality, their citizens can use virtual private networks to flout the rules).

Internet freedom as a cornerstone of democracy

Conversely, Western societies have historically adopted the philosophy that social media should, like the rest of the Internet, be a democratising and predominantly open technology.

At the root of this concept is the idea that social media is merely an enabling technology, and that those producing and publishing content on it should be responsible for that content. This contrasts with traditional media platforms, which are liable for the content they distribute.

However, this on its own raises an important existential question: if social media companies make money from the content on their platforms, should they not be responsible for policing it? And are they the right people to police it?

Ultimately, most politicians have realised that, pragmatically, it is impossible for government to regulate the comments, content and views of over a billion people. Instead, they have pressured social media companies to police their platforms, particularly in relation to stamping out online abuse, pornography and terrorist groups.

To a large extent, social media companies have played along, walking a tightrope between censorship and free speech.

Escalation to regulation

Over recent years, government thinking on social media regulation has shifted radically. The arguable origin of this is the ongoing global fight against religious extremism. The use of social media to organise and coordinate terrorist activities, coupled with its power as a recruitment and propaganda tool, has led to media and political opprobrium.

In turn, the capability of social media to influence and engage audiences without the moderating lens (and indeed Leveson regulations) of conventional media has meant that “alternative” ideologies can be propagated.  

Arguably this has helped stoke the rise of both far-right and far-left political movements, culminating in significant electoral upsets in the US presidential election and the EU referendum. In particular, concerns have centred on political organisations using fake news to manipulate people and fuel acrimony and discord.

Conceptually, it would be perverse to have a traditional media subject to strictures on how it reports while anyone can use social media to publish without constraint.

At the same time, concerns are growing over the use of data by social media platforms. These centre on two key areas – whether users are aware of how their data is being used or collected, and whether those using the data have permission to do so. Data theft is also a significant risk.

In addition, studies show that social media can be highly psychologically addictive, leading to extreme behaviours commonly associated with gambling.

Finally, many social media platforms have built virtual monopolies that make them difficult to compete with.  

Changing narrative a precursor to regulation

Significantly, the political and media narrative towards social media is hardening.

Take the Mayor of Hackney’s Twitter response today to a stabbing in his borough: while blaming reduced policing for crime has been common since austerity was introduced in 2010, blaming social media for its role in gang crime is a relatively new phenomenon.

Well-known pioneers in technology such as Roger McNamee are now calling for regulation, while over half of Americans want to see news on social media regulated. Alex Stubb, the former prime minister of Finland, has publicly called for regulation, as has Marc Benioff, the CEO of Salesforce.

The recent Cambridge Analytica scandal has accelerated the move towards social media regulation. Facebook’s Mark Zuckerberg is scheduled to appear before the Senate Judiciary and Commerce Committees on April 10, then appear the next day before the House Energy and Commerce Committee. He has been invited to appear before the Fake News inquiry of the House of Commons Digital, Culture, Media and Sport Committee.

Simultaneously, data regulations such as the GDPR are already due to come into force (on May 25), which will require greater transparency over data permissions.

What regulation could look like  

There are many options available to policymakers, though broadly speaking regulation is likely to focus on regulating news, regulating use, data privacy and competition.

Regulating news could involve social media platforms signing up to IPSO’s Editors’ Code of Practice, though this would have to be adapted. Platforms would be held liable for any defamation or breach of copyright, intellectual property or privacy.

Use regulation could centre on blacklisting of certain organisations that have carried out illegal activity or propagated fake news, though of course this would have to be subject to a system of checks and balances to protect free speech. 

For consumers, regulators could limit the time spent on social media, along with blocking content for vulnerable audiences.

Data privacy regulation is already on the agenda. In future there may be a greater focus on communicating clearly to social media users how their data will be used, along with stricter sanctions for those who use data beyond the permissions they have been granted.

The safe harbour agreements that govern how personal data is shared between countries (specifically the EU and US) are likely to come under greater scrutiny, and data sharing could be limited.

Finally, competition regulators could look at the monopolistic or oligopolistic nature of the market. Google and Microsoft have fought a series of anti-trust cases in the EU and US over recent years, a fate that could yet befall social media platforms.

A new paradigm

Ultimately, social media platforms may end up being the victims of their own success. They now find themselves in an uncomfortable spotlight and, Icarus-like, risk being burnt by their rapid rise.

It now seems inevitable that some sort of regulation will come – though its form is unclear. What is more certain is that the days of unfettered social media use look numbered, and that communications professionals will have to adapt to a brave new, regulated world.

To discuss your social media or digital strategy, please contact Charles Ansdell at ca@redleafpr.com.
