“In an attempt to weed out the worst parts of the internet, the Online Safety Bill actually jeopardises the best parts of the internet.”
Lucy Crompton-Reid, Chief Executive of Wikimedia UK, and Rebecca MacKinnon, Vice President of Global Advocacy at the Wikimedia Foundation, 2023

In recent years, social media platforms have become a key enabler of many forms of (violent) extremism and hate content across the globe. In the UK, since the Covid-19 lockdowns of 2020, widespread concerns have been expressed about a rise in newer, so-called ‘post-organisational’[1] groups and communities online. This trend spans the ideological spectrum, from Inceldom and misogyny-driven violence within the ‘manosphere’, to conspiracy theories such as QAnon and ‘The Great Replacement’, to individuals who espouse anti-government and anti-immigration sentiments.

[1] Post-organisational refers to (violent) extremism and/or terrorism in which individuals’ membership of, and support for, particular groups is more ambiguous.

Whose job is it to keep the online space safe? ‘Big Tech’ companies or government?

Efforts by the UK Government to address this have culminated in the new Online Safety Bill which, having passed its final parliamentary stage in the House of Lords last week, will shortly become law. Whilst so-called ‘big tech’ companies have already developed their own internal guidelines and terms of service defining ‘harmful’ and ‘hateful’ content, some companies’ efforts remain geared towards combating violent extremism and terrorism from organised Islamist-related threats such as ISIS and Al-Qaeda. Notwithstanding debates about the efficacy (or otherwise) of ‘big tech’ companies in removing offensive content that incites hatred and/or violence, it is of great importance to understand precisely who is responsible for the safety of young people online, and the role of the existing structures that attempt to counter the growing normalisation of online extremist attitudes.

Countering the normalisation of online extremism – current approaches

By 2014, there were reportedly over 10,000 terrorist websites, with an estimated audience of 3 billion, linked to ideologies including those of Al-Qaeda, Boko Haram and ISIS (Brown and Pearson, 2018). To counteract this, the most prominent approaches to content disruption and removal are Europol’s Internet Referral Unit (IRU); the UK’s Counter-Terrorism Internet Referral Unit (CTIRU) – which has removed 300,000 pieces of terrorist material since 2010 (Home Office, 2018) – and the Global Internet Forum to Counter Terrorism (GIFCT), set up in 2017 by Facebook, Microsoft, Twitter and YouTube. In addition, following the Christchurch mosque attacks in 2019, the governments of France and New Zealand founded the Christchurch Call, a global initiative to eliminate terrorist and violent extremist content online. It has since grown to include 55 governments (including the UK), 14 online service providers (including Google, Meta and Twitter) and a number of civil-society groups.

Launched in 2015, Europol’s IRU monitors and works to reduce the level and impact of terrorist and extremist content online. To date, however, the IRU focuses its efforts predominantly on online propaganda produced by three terrorist groups and their supporters: Islamic State, Al-Qaeda and Hayat Tahrir al-Sham. Setting aside the potential problems associated with this selective focus on Islamist-related extremism and/or terrorism online, the unit is effective on its own terms: 86% of content it flags for referral results in removal.

The GIFCT, for its part, aims at algorithmic content removal on large social media platforms. For example, within the first six months of the GIFCT’s implementation, Twitter suspended 299,649 accounts for violations related to promoting terrorism (Twitter, n.d.). In the latest available figures, covering 1 January to 30 June 2022, Twitter required users to remove 6,586,109 pieces of content, took enforcement action on 5,096,272 accounts, and suspended 1,618,855 accounts for violating a number of different Twitter Rules.
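One concrete mechanism behind such removals is the GIFCT’s shared hash database, through which member platforms exchange digital fingerprints of known terrorist content so that re-uploads can be matched automatically. The Python sketch below is a minimal illustration of that matching step only; the database contents and function names are invented, and SHA-256 is used purely to keep the sketch self-contained (production systems rely on perceptual hashes such as PDQ, which tolerate re-encoding and minor edits).

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Real hash-sharing systems use *perceptual* hashes that survive
    # re-encoding; SHA-256 stands in here for simplicity.
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database, seeded with one already-identified item.
known_item = b"previously identified terrorist video (illustrative bytes)"
shared_hash_db = {fingerprint(known_item)}

def check_upload(content: bytes) -> str:
    """Flag an upload whose fingerprint matches the shared database."""
    if fingerprint(content) in shared_hash_db:
        return "flag_for_removal"  # exact match with known content
    return "allow"                 # no match; normal moderation applies

print(check_upload(known_item))                   # flag_for_removal
print(check_upload(known_item + b" re-encoded"))  # allow: exact hashes miss edited copies
```

As the final line suggests, even a trivial edit defeats an exact-hash match – one reason the ‘whack-a-mole’ problem described below persists.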

Meta (the parent company of Facebook, Instagram and other social media platforms) releases quarterly updates – known as transparency reports – on content activity and removal across a number of areas, including child endangerment, hate speech, violence and incitement, and dangerous organisations (terrorism and organised hate). In 2022, Facebook removed 56.2 million instances of terrorism content, with on average 99% of that content found internally and only 1% reported by users. In the same year, Instagram removed 1.68 million pieces of such content, with on average 92.5% found internally and 7.5% reported by users.

While such direct forms of action are favoured by some, for others the disruption of online networks produces a ‘whack-a-mole’ effect. Despite interventions by the IRU, CTIRU and GIFCT, the ease of copying, redistributing and livestreaming (violent) extremist and terrorist content allows for continuous and instantaneous virtual replays. Equally, many individuals, groups and communities are able to remain active on alternative, fringe social media platforms, treating suspension from mainstream platforms as a badge of ‘online martyrdom’.

UK’s Online Safety Bill – allegedly the ‘safest place in the world to be online’

In an attempt to protect children, young people and adults online, the UK Government has introduced the Online Safety Bill, first unveiled in a white paper in 2019; at the time of writing, it has passed through the House of Lords and awaits Royal Assent to become law. In particular, the Online Safety Bill requires social media platforms to consistently enforce their terms of service, and gives the Office of Communications (Ofcom) – the designated regulator – the power to fine companies up to 10% of global revenue, or prosecute them, where they neglect to remove harmful and/or illegal content or fail to prevent young people from accessing it. The full list of harmful and/or illegal content is detailed below:

Illegal content

  • child sexual abuse
  • controlling or coercive behaviour
  • extreme sexual violence
  • fraud
  • hate crime
  • inciting violence
  • illegal immigration and people smuggling
  • promoting or facilitating suicide
  • promoting self-harm
  • revenge porn
  • selling illegal drugs or weapons
  • sexual exploitation
  • terrorism

Harmful content

  • pornographic content
  • online abuse, cyberbullying or online harassment
  • content that does not meet a criminal level, but which promotes or glorifies suicide, self-harm or eating disorders

Notably, transparency commitments for platforms’ algorithms and regulation of their business models could also help. In its current form, the aim of the Online Safety Bill is to reduce harm on social media platforms. The term ‘harm’ is deliberately broad and wide-ranging – the UK Government intends it to capture issues such as disinformation, worsening mental health and many modes of extremism, amongst others – and these issues are all amplified by algorithms that curate and recommend content. The goal of those algorithms: to keep us online for longer.
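To make that amplification dynamic concrete, here is a deliberately simplified, hypothetical sketch in Python of an engagement-ranked feed. No platform publishes its actual ranking code, and every field name and weight below is invented for illustration; the structural point is simply that an objective built on predicted time-on-platform never asks whether the content being promoted is harmful.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_seconds: float  # model's estimate of time spent
    predicted_share_rate: float     # model's estimate of reshares

def engagement_score(post: Post) -> float:
    # Toy objective: maximise predicted time-on-platform. The weight
    # (30.0) is invented; note the objective contains no notion of
    # harm, accuracy or wellbeing.
    return post.predicted_watch_seconds + 30.0 * post.predicted_share_rate

feed = [
    Post("Local news round-up", 40.0, 0.01),
    Post("Outrage-bait conspiracy clip", 95.0, 0.12),
    Post("Homework study tips", 55.0, 0.02),
]

# Rank purely by predicted engagement: the provocative clip tops the feed.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):6.1f}  {post.title}")
```

Under this toy objective the conspiracy clip ranks first, not because any rule prefers it, but because provocative content tends to score highest on exactly the signals such systems optimise.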

Is it workable?

In particular, the Online Safety Bill has been criticised for not prioritising platform and algorithm design, focussing instead on content-based moderation and/or removal. Indeed, the power and danger of algorithms in steering young people towards (violent) extremist and hate content – particularly the ‘manosphere’ on YouTube and the ‘dark side’ of TikTok – have been well noted.

Indeed, commentators have raised concerns about the Online Safety Bill, including the privacy implications of a tool called client-side scanning (CSS), which would scan data and information on a phone before it is encrypted and sent. WhatsApp in particular – the most popular instant messaging app, with 2.7 billion monthly active users – has pre-emptively refused to comply, voicing criticisms around privacy and security. Wikipedia too – an invaluable encyclopaedic website – has followed suit, outlining concerns and warning that it could withdraw access in the UK.
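For readers unfamiliar with the mechanism, the sketch below shows where client-side scanning would sit in a messaging pipeline. It is a hypothetical Python illustration, not WhatsApp’s design or any real proposal’s code: the blocklist, function names and toy ‘encryption’ are all stand-ins. The crucial detail is the ordering – the scan runs on the sender’s device against the plaintext, before end-to-end encryption, which is precisely the step critics argue breaks the privacy guarantee.

```python
import hashlib
from typing import Optional

# Hypothetical on-device blocklist of fingerprints of known illegal
# content (real proposals envisage perceptual hashes, not SHA-256).
BLOCKLIST = {hashlib.sha256(b"known illegal image").hexdigest()}

def report_match() -> None:
    print("match reported")  # stand-in for a reporting channel

def client_side_scan(plaintext: bytes) -> bool:
    # Runs on the sender's device against the *unencrypted* message,
    # which is the step privacy critics object to.
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def encrypt(plaintext: bytes) -> bytes:
    # Stand-in for end-to-end encryption (e.g. the Signal protocol);
    # a toy XOR, NOT real cryptography.
    return bytes(b ^ 0x5A for b in plaintext)

def send_message(plaintext: bytes) -> Optional[bytes]:
    if client_side_scan(plaintext):  # 1. scan plaintext on-device
        report_match()               # 2. matches are reported
        return None                  # 3. message is never sent
    return encrypt(plaintext)        # 4. only then encrypt and send
```

Even this toy version shows why the objection is structural rather than about any particular implementation: whatever the blocklist contains, the scanning step has full access to the message before it is ever encrypted.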

Notwithstanding the key concerns expressed about the bill – including in relation to human rights and privacy – any amendments must also seek to specifically afford better protection to women and girls, which the bill in its current iteration fails to do.

Practical and key considerations

Ultimately, as this discussion has detailed, content moderation and/or removal is not a straightforward task. In practice, the future implementation of the Online Safety Bill in its current form casts doubt on the level of privacy and security that will be afforded to users of social media platforms and the internet. Whilst targeting the types of harmful and illegal content listed above is necessary and appropriate, it must be considered to what extent the fear of false negatives (harmful content that goes undetected) is allowed to outweigh the fear of false positives (legitimate content and private communications wrongly flagged). Put simply, given the concerns raised by global companies above, the Online Safety Bill presents a double-edged sword: ensuring safety and security online on the one hand, and operating a perceived culture of surveillance – ‘Big Brother’ always watching – on the other.

I’m just an educator. Does it impact me?

This being said, whilst the Online Safety Bill is yet to be finalised, there are many practices that teachers, educators and practitioners alike can engage with to ensure they are well informed.

  • Digital literacy and staying safe online

Provide young people with practical guidance on how to stay safe online, including how to recognise and report extremist and hateful content. This includes information on how to fact-check against disinformation, conspiracy theories and extremist recruitment by charismatic individuals. In addition, encourage critical and solution-focused thinking centred on understanding the ‘sales pitch’: how groups exploit vulnerabilities, emotions and desires. This can be bolstered by viewing extremism alongside other forms of exploitation, especially the practices used in gang-related activity such as child criminal exploitation and illicit drug trafficking.

  • Raise awareness about existing capacities

Whether as a teacher or educator at a school, college or other educational provision, it is important to ensure that all staff are aware of the internal and external support services available to young people. Behavioural intervention teams and local community-based teams should ensure they know the contact procedures, so that young people can receive appropriate support in a timely manner.

  • Identify resources and training

There are numerous resources that the public and practitioners can use to learn more. These range from thematic material that looks at different types of concern more broadly, such as academic articles and reports; practical resources that focus on structured responses to targeted types of concern, such as infographics, toolkits and training modules; and behavioural resources that identify specific behaviours and processes that may indicate a young person’s vulnerability to violence and/or radicalisation. Websites such as ISD Global, Crest Research and Educate Against Hate, as well as ConnectFutures’ own, are all excellent sources of knowledge and information for teachers, educators and practitioners alike.

How can ConnectFutures help?

We at ConnectFutures provide expert facilitators to deliver workshops in a variety of forms and to a diversity of audiences across the UK. These workshops aim to increase knowledge, resilience and confidence on topics relating to disinformation, extremism, anti-racism, hate and violence. Specifically in relation to the issues discussed here, we deliver workshops for young people (Years 5-13), educators and professionals on the following: Healthy and Happy Relationships (inc. misogyny); Fake News, Conspiracy Theories and Truth; Safeguarding Against Online Extremism (SAVE); Incels, Misogyny and the Manosphere; Rise of the Far Right, Mixed Ideology and Hateful Extremism; and Race, Privilege and Justice.