September 2022

The controversial return of the UK’s Online Safety Bill 

The inquest into the tragic death of Molly Russell—a British teenager who took her own life in 2017 after viewing self-harm and suicide content on sites like Instagram and Pinterest—has reignited debates around how to address harmful online content in the UK. And as Parliament convenes next week, reports indicate that the controversial Online Safety Bill will soon be presented for its third and final reading.

While the bill is highly unlikely to actually deliver a safer experience for children online, it will almost certainly have widespread adverse impacts on freedom of expression and privacy in the UK. GPD has raised concerns over these risks at every stage of the bill’s development—as have children’s rights advocates, human rights organisations and academics. One of our core criticisms of previous drafts of the bill was its requirement for online platforms to remove content which is “legal but harmful”—defined as that which poses a risk of “significant physical or psychological harm”. The intention behind these provisions may well be to protect users—and particularly vulnerable users like Molly Russell—from disturbing content and abuse online. But they would cast the net of prohibited speech incredibly broadly, requiring platforms to censor vast swathes of speech which is not even illegal.

In light of these concerns, we welcome indications from the new leaders of the governing Conservative party that the provisions on “legal but harmful” content accessed by adults will be removed from the final version of the bill. However, platforms likely to be accessed by children will apparently still be obliged to make such content inaccessible. Given that virtually all platforms could, in practice, be accessed by children, this broad requirement will essentially force platforms to choose between two undesirable options in order to comply:

  1. Treating all users as if they are children and simply removing any content on the platform which might be considered “harmful”—returning us to the freedom of expression and censorship concerns outlined above; or
  2. Finding some way of reliably identifying underage users in order to either block them from the site or restrict the content they can access, introducing considerable risks for users’ rights to privacy and other human rights. (See our response to Ofcom’s recent call for evidence on its regulatory duties for a more detailed assessment of the dangers associated with age verification). 

The window for influence on the Online Safety Bill is narrowing, but there are amendments that the UK Government could make which would significantly improve the bill. As part of the Save Online Speech coalition, we’re calling for the “legal but harmful” category to be dropped in its entirety, not just in relation to content accessed by adults; for more independent oversight over the powers granted to the Secretary of State for Digital, Culture, Media and Sport to define further categories of regulated speech; and for the requirements for platforms to proactively scan people’s private communications to be removed. These amendments would bring the Online Safety Bill more in line with better-calibrated pieces of platform regulation, such as the EU’s recently passed Digital Services Act (which imposes obligations on platforms to remove only illegal content and includes clear safeguards around Member States’ regulators’ enforcement powers).

There is a clear need to address the adverse human rights impacts of widely disseminated legal but harmful online content, such as disinformation, offensive speech or self-harm imagery, as in the sad case of Molly Russell. Yet addressing the harms caused by these diverse content types in a rights-respecting fashion will require far more than a catch-all clause in an online safety regime. As policymakers around the world—not just in the UK—grapple with these complex issues, consultations with human rights experts, technologists, trust and safety professionals and, most importantly, the communities most affected by each type of content, will be vital for designing targeted, proportionate and rights-respecting solutions.

Digital issues at the UN: an update

September was a busy month for cyber-related discussions at the UN. A few developments to highlight: 

  • The Office of the High Commissioner for Human Rights (OHCHR) published a widely anticipated report on the right to privacy in the digital age, which identified three main areas of concern: the abuse of intrusive hacking tools (‘spyware’) by state authorities; threats to strong encryption; and the impacts of widespread digital monitoring of public spaces, both offline and online. The report features arguably the strongest endorsement of encryption ever issued by the OHCHR, drawing heavily on the joint input of the Global Encryption Coalition (GEC), of which GPD is a member.
  • The 2022 ITU Plenipotentiary wraps up tomorrow. Our Head of Global Engagement and Advocacy, Sheetal Kumar, has been in attendance—stay tuned for her insights and analysis next week. In the meantime: three reasons why human rights defenders should be paying close attention to the ITU.
  • Cybersecurity, cyber threats and disinformation were dominant themes across discussions at the 77th Session of the United Nations General Assembly (13 - 27 September 2022). Several states noted the damaging impacts of online disinformation and cyberattacks, particularly within the context of the war in Ukraine (as we highlighted in a recent piece), with many calling for further action to advance peace and security in cyberspace, including via the Ad Hoc Committee on Cybercrime and Open Ended Working Group on ICTs. The UN Secretary-General, António Guterres, specifically referenced ‘hate speech, misinformation and abuse’ in his statement, while Special Rapporteur Irene Khan published a report examining the challenges posed by disinformation to freedom of opinion and expression during armed conflict. The Freedom Online Coalition (FOC) also co-hosted a side session on ‘Upholding Democracy and Human Rights in the Face of Rising Disinformation’.
  • On the margins of the Human Rights Council’s 51st session (12 September - 7 October), the 2022 Chair of the Freedom Online Coalition (FOC), Canada, co-hosted a multistakeholder roundtable with GPD on how the obligation of non-discrimination (particularly on the grounds of gender and race) applies to the risks posed by digital technologies, including AI. Key findings from the event will be published soon. Three resolutions of note were also adopted during the session: on countering cyberbullying (A/HRC/51/L.17); neurotechnology and human rights (A/HRC/51/L.3); and the human rights implications of new and emerging technologies in the military domain (A/HRC/51/L.25).
Copyright © 2022 Global Partners Digital.
All rights reserved
Global Partners Digital · Second Home · 68 Hanbury St · London, E1 5JL · United Kingdom