Governments are very bad at internet regulation. That much has been clear for a very long time – scholars and others who look at the subject generally wince when they hear about a new plan to ‘rein in the Wild West’ or something similar. It is thus with trepidation that many of us are preparing for the attempts to ‘regulate’ further after the key role that social media seems to have played in the riots that followed the brutal child murders in Southport on 29th July. ‘Something must be done’ is the cry – it seems highly likely that something will be done, but whether something should be done, and if it should, what should be done are really not such easy questions.
Regulating social media is difficult
The first and most important thing to be clear about is that there are no easy answers here. Social media regulation is difficult. Freedom of speech is difficult. Anyone who suggests that there are easy solutions is primarily demonstrating that they don’t understand either social media or freedom of speech – but that of course won’t stop the suggestions. So far, amongst other things, people have suggested banning Twitter/X in the UK (which would be extremely authoritarian, with nasty repercussions) and removing anonymity (one of the absolute worst ideas, as I will try to explain below). ‘Magic wand’ solutions are often offered: they almost never work, and almost always have very severe side effects. They make things worse, not better.
What was the real problem?
When you are trying to suggest ‘solutions’, the first thing to do is to try to understand what the problem is – and also what the problem isn’t.
In this case, soon after the news of the horrendous murders came out, a false story appeared on Twitter/X that the murderer was a Muslim, an asylum seeker, and had recently arrived on one of the small boats that have been crossing the Channel. That story was amplified by some big accounts on Twitter – including politicians, ‘influencers’ and people in the media – causing furious anger and calls for action. Those calls for action led to the riots – organised apparently through social media, supported and illustrated with pictures and videos pushed through social media.
So there are a number of stages involving social media, three of which are key:
1. The creation of the false story – it is still not entirely clear who did this, though at least two sources have been mentioned: one a woman in the UK, the other a (fake) news site that appears to be based in Pakistan.
2. The spreading of the false story – a lot of people and accounts were involved here, from the very big and prominent to the small and, in some cases, anonymous.
3. The organisation of the riots on the back of this widespread false story – largely on social media, though almost certainly other communication methods were also involved. It is not hard to envisage drunken discussions in pubs having a certain impact.
The first of these, the creation of the story, is very difficult to stop happening. People can write what they want – that’s the nature of freedom of speech and the way that social media works. In certain circumstances it can already be criminal, and I understand that the woman in the UK has already been charged under s179 of the Online Safety Act 2023, the ‘false communications offence’. Whether this charge will stick is another matter – this is a new and largely untested law, and s179 has some criteria that may make it difficult.
s179 says:
“A person commits an offence if—
(a) the person sends a message (see section 182),
(b) the message conveys information that the person knows to be false,
(c) at the time of sending it, the person intended the message, or the information in it, to cause non-trivial psychological or physical harm to a likely audience, and
(d) the person has no reasonable excuse for sending the message.”
Most of that would cover the creation of such a story if the creator hoped to cause harm – but did they? And was the harm they intended aimed at a likely audience of the message? The likely audience – potential rioters – may not themselves be harmed; rather, they want to cause harm to others. It will be interesting to see how this pans out – it may be a good test of the law.
Having said that, it isn’t really the key point. There will always be people who try to spread rumours like this – rumours that they want to be true, if not in the detail then in the substance. They wanted it to be a Muslim asylum-seeker recently arrived by boat. That, of course, is part of the key to successful fake news: create news that people want to believe, and they’re not just more likely to believe it, but more likely to spread it.
It’s about the big accounts
And it’s the spreading that’s the key – there are vast numbers of rumours and pieces of fake news out there, but the only ones that have an impact are those that are spread to wide audiences – and the key to that is getting big accounts to spread them. An anonymous account with 20 followers makes no difference. A known account with 2 million followers makes an enormous difference. In the way that Elon Musk’s Twitter/X works, if that account is verified – with the paid-for blue tick – that helps too, as Twitter/X prioritises tweets from verified accounts. Even Nigel Farage recognised this, as he tried to lay the blame on (other) big accounts such as Andrew Tate’s.
These big accounts are also the key to the third part of the problem – the organisation of the riots. If a small, anonymous account suggests a meeting, it won’t have much effect, but if an influential ‘leader’ suggests it, crowds will turn up.
This has a number of implications – the most important being that to think that what matters is the small accounts, the anonymous accounts, is to fundamentally misunderstand the problem. It’s like trying to cure measles by painting over the spots, one by one. What you need is (a) a vaccine and (b) some way to stop the spreading. Both of these mean you have to deal with the big accounts. Any ‘solution’ that starts by dealing with the small accounts, or with anonymity, is not just bound to fail but will have devastating consequences for the people who rely on anonymity for their own protection – people like the victims of spousal abuse, children who have escaped abusive parents or are the victims of cyberbullies, and people whose names indicate their religion or ethnic background. If Islamophobia or anti-Semitism is on the rise, forcing people to label themselves as Muslims or Jews by requiring real names could have horrible consequences.
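To make the point about big accounts concrete, here is a minimal, self-contained Python sketch of rumour spread on a synthetic follower network. Everything in it is invented for illustration – the network size, the follower distribution, the share probability – and it models no real platform; it simply shows why suspending a handful of high-reach accounts tends to curb spread far more than suspending vast numbers of small ones.

```python
# A toy, illustrative simulation (NOT a model of any real platform) of why
# big accounts, not small anonymous ones, determine how far a rumour spreads.
# Every number here - network size, follower distribution, share probability -
# is invented purely for illustration.
import random

random.seed(42)
N = 10_000  # accounts in our toy network

def follower_count() -> int:
    # Heavy-tailed follower counts: most accounts tiny, a few very large.
    return min(int(random.paretovariate(1.2)), N - 1)

# followers[a] = the accounts that follow account a (and so see its posts)
followers = [random.sample(range(N), follower_count()) for _ in range(N)]

def reach(suspended: set[int], share_prob: float = 0.1,
          seeds: int = 100, rounds: int = 10) -> float:
    """Average number of accounts exposed to a rumour, with `suspended` removed."""
    total = 0
    for _ in range(seeds):
        seen = {random.randrange(N)}   # one account starts the rumour
        active = seen - suspended
        for _ in range(rounds):
            shared = set()
            for account in active:
                if random.random() < share_prob:       # account reshares
                    shared.update(followers[account])  # its followers see it
            shared -= suspended        # suspended accounts see nothing
            active = shared - seen     # the newly exposed may reshare next round
            seen |= shared
        total += len(seen)
    return total / seeds

by_size = sorted(range(N), key=lambda a: len(followers[a]))
smallest_half = set(by_size[: N // 2])    # the 50% of accounts with fewest followers
biggest_1pct = set(by_size[-N // 100:])   # the 1% of accounts with most followers

print(f"baseline reach:        {reach(set()):8.1f}")
print(f"suspend smallest 50%:  {reach(smallest_half):8.1f}")
print(f"suspend biggest 1%:    {reach(biggest_1pct):8.1f}")
```

The behaviour that heavy-tailed cascade models like this one generally show is that removing half of all accounts (the smallest ones) barely dents a rumour’s reach, while removing the top 1% collapses it – the toy-model version of treating the disease rather than painting over the spots.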
Attacks on anonymity or pseudonymity should therefore be avoided at all costs. Instead, if you really want to deal with this kind of problem, there needs to be a concerted attempt to deal with the big accounts. This might be through the law, or through some other kind of sanction: MPs who are involved could be sanctioned by parliament, broadcast journalists by Ofcom, and so forth. That might be easier than creating new law – amongst other things because framing such a law would be very troublesome, and it would be almost certain to have unforeseen consequences as well as suppressing free speech.
What about the social media companies?
The first thing to be clear about is that this is not about ‘rogue’ algorithms or ‘misuse’ of social media. This is how the algorithms are intended to work – sending people the content they are most likely to engage with. It is also highly unlikely that the social media companies will sanction the big accounts of their own accord – these are the accounts that drive engagement and support the companies’ business models. And, if Elon Musk’s recent behaviour is anything to go by, he is every bit as likely to magnify these stories and support these accounts as to oppose them.
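As a hypothetical illustration of that point, an engagement-optimised feed can be caricatured in a few lines of Python. The field names, weights and numbers below are all invented – no platform publishes its ranking function – but the structure makes the point: nothing in the score asks whether the content is true.

```python
# A deliberately crude caricature of an engagement-ranked feed. The weights
# and numbers are invented for illustration; no real platform's ranking
# function is being reproduced here.
from dataclasses import dataclass

@dataclass
class Post:
    author_followers: int
    author_verified: bool   # the paid-for blue tick
    likes: int
    reshares: int
    replies: int
    text: str

def engagement_score(p: Post) -> float:
    # Reward whatever users interact with, regardless of accuracy.
    score = p.likes + 3 * p.reshares + 2 * p.replies
    score *= 1 + p.author_followers / 1_000_000   # big accounts start ahead
    if p.author_verified:
        score *= 1.5                              # verification boost
    return score

posts = [
    Post(200, False, 4, 1, 0, "eyewitness correction, little traction"),
    Post(2_000_000, True, 9_000, 4_000, 2_500, "false but outrage-inducing claim"),
]
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):>10.0f}  {p.text}")
```

In this sketch the false but inflammatory post from the big verified account tops the feed not because anything has gone wrong, but because the function is doing exactly what it was designed to do: maximise engagement.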
The Online Safety Act tries to impose a ‘duty of care’ on social media companies, but care for whom? In this case, the harm done is not to their users, but to others. Do they have a duty of care for everyone? It is hard to see how this would work in practice.
There’s already plenty of law…
The last question is whether we need more law anyway. There’s already a lot of law out there. When the dust settles, we’ll see that people have been prosecuted under public order legislation, under malicious communications legislation, for communications offences, and so on. Punishing those who actually riot is not going to be a problem. Punishing those who used social media to instigate these acts is not likely to prove a problem either.
Punishing those behind the acts is another matter. It seems notable to me that of the many proposals mentioned by politicians so far, none seem even to be trying to hold to account those whose rhetoric, both online and offline, has made it all happen. Until and unless they do, the rest is all irrelevant.