April 2022

Government responses to online disinformation across Sub-Saharan Africa 

When governments impose harsh criminal restrictions on sharing disinformation online, what are the consequences for human rights?

  • In Zimbabwe in January 2021, prominent political activist Hopewell Chin’ono tweeted that national police had killed an infant while enforcing COVID-19 lockdown rules. He was arrested on charges of sharing false news, denied bail, and detained for four months before being released.

  • In Uganda, Jamilu Ssekyondwa sent messages to his friends in July 2021 suggesting that President Museveni had died. He was arrested on charges of spreading false and offensive communication, refused bail, and detained on remand for eight months.

  • In Cameroon, Emmanuel Matip, director of the private newspaper Climat Social, published a story about an alleged coup plot in July 2020. After being arrested and charged by a military court with “spreading fake news”, he was detained for 16 months before his eventual acquittal and release.

These are, unfortunately, not the only instances of journalist harassment and restrictions on freedom of expression under the guise of “tackling disinformation” to have occurred in Sub-Saharan Africa in recent years. Of the 48 countries in the region, 44 have laws in force which prohibit the sharing of disinformation, and 29 have taken law enforcement action against groups or individuals on disinformation-related charges. Nine of these laws have been proposed or passed since the beginning of the COVID-19 pandemic, as we detail in the Listening Post below. By and large, these laws are overbroad in scope, impose disproportionate penalties and, as the examples above demonstrate, pose significant risks to individuals’ right to freedom of expression.

In response to this trend, we launched a new tool, LEXOTA, on World Press Freedom Day, in collaboration with the Centre for Human Rights at the University of Pretoria (CHR), Article 19 West Africa, the Collaboration on International ICT Policy in East and Southern Africa (CIPESA) and PROTEGE QV. LEXOTA includes in-depth legal analysis of over 90 pieces of legislation from governments in the region that impose restrictions on online disinformation, and unpacks over 80 instances of governments and authorities enforcing these restrictions in ways detrimental to freedom of expression, press freedom and digital rights. By shining a spotlight on these policies and incidents, LEXOTA serves as a hub of information for human rights defenders in the region, aiming to inform both regional and national advocacy strategies.

Explore LEXOTA’s country profiles, comparison features and key resources on rights-respecting responses to online disinformation. Register for the launch event on 10 May to hear from experts about the challenges posed by government responses to disinformation and how LEXOTA will be used in practice.

State inputs to the Chair of the AHC 

April saw states provide inputs to the Chair of the Ad-Hoc Committee on Cybercrime (AHC) on the three elements of the cybercrime convention that will be discussed at the next meeting in May: general provisions; criminal provisions; and law enforcement and safeguards. The inputs vary in specificity, with many offering specific text and/or wording for definitions of key terms. This means that Russia’s input (submitted also on behalf of Belarus, Burundi, China, Nicaragua and Tajikistan) is no longer the only one on the table with text provisions, though it remains among the most detailed. As expected, it criminalises a wide range of offences, including “the creation and use of digital information to mislead the user”, and does not include adequate safeguards.

From a human rights perspective, as ever, the devil is in the details. Many NGO inputs to the AHC so far have reiterated how important it is that the convention focus on crimes against computer systems, not content-based offences. However, most of the state inputs (apart from the European Union’s) recommend that the convention criminalise content-based offences, although there are differences of opinion over whether the definition of such offences should be broad or narrow in scope. Some (the US, UK and Australia) suggest limiting this to select crimes whose scope, speed and scale are substantially enhanced by the use of a computer (in keeping with the Budapest Convention). Others (Mexico, Brazil and Malaysia) suggest a broader scope; Mexico, for example, wants States Parties to “recognize as crimes for the purposes of this Convention, all criminal acts recognized by the existing International Law that are perpetrated by information technologies and electronic means.”

There is general agreement that the scope should include the criminalisation of child sexual abuse material or “online child abuse offences”, and many states (e.g. the UK) also support addressing offences against women and children, such as sex trafficking.

Most state inputs also reiterate the importance of human rights protections, with many specifying human rights instruments and relevant protections in their proposals for the chapter on “conditions and safeguards”. Ideally, from a human rights perspective, human rights would be an objective of the proposed treaty, feature in its general provisions and preamble, and be integrated throughout the convention’s chapters. Yet there is no agreement over the inclusion of a specific section dedicated to human rights safeguards. This has so far been only briefly discussed, and is not listed for discussion in the agenda, recently published by the Chair, for the second substantive meeting at the end of May. This raises the question of when and how it will be discussed, and what the process for deliberations on the text more generally will look like.

The inputs shared so far illustrate just how difficult it will be for countries to agree on any substantive areas, as well as the huge risks to human rights that could arise from any agreed text and the compromises reached along the way.

 

The European Union’s Digital Services Act 

The European Union has reached a political agreement on the long-awaited Digital Services Act (DSA), which aims to update the EU’s legal framework for intermediary liability for illegal content and to introduce new obligations on online platforms around their content moderation policies and transparency reporting practices.

In our previous analysis of the draft text, we commended the legislation for its safeguards for individuals’ privacy and freedom of expression, and for its graded approach to platforms of different sizes. An amendment requiring adult content platforms to implement mandatory phone registration for individuals uploading content was struck down, in a win for online anonymity and sex workers’ safety. But a few other last-minute changes introduce new human rights concerns: the addition of a vaguely worded “crisis response” provision; the rejection of proposed safeguards for encryption; and the weakening of the provisions prohibiting dark patterns and the use of sensitive data for online advertising. Concerns also remain over whether companies will be able to invoke “trade secrets” clauses to escape the new transparency requirements and deny data access requests, and over whether the text will allow Digital Services Coordinators (tasked by each member state with supervising and enforcing compliance with the DSA) to impose disproportionate responsibilities on smaller platforms, potentially stifling competition and diverse expression.


Whilst the broad approach of the DSA remains positive, these technical details—some of which are still to be finalised in the remaining discussions—will likely be the battlegrounds for freedom of expression and right to privacy in the region for years to come. We’ll monitor closely for further developments, and analyse the final text once it’s published; in the meantime, we’ve found commentary on the agreed text by Access Now, Algorithm Watch, Amnesty, EU Disinfo Lab and Tech Policy Press useful. 
  

Conclusion of OEWG stakeholder modalities debates

This April, the long-running saga over stakeholder modalities at the Open-ended Working Group on responsible state behaviour in cyberspace (OEWG) concluded with the adoption of modalities by consensus. This follows continued disagreement about how stakeholders should be able to engage in OEWG discussions (which limited the proceedings of the second substantive session to informal mode), and the unprecedented tabling of a resolution at the UN General Assembly by the UK (which would have broken with the consensus approach taken so far and made meaningful progress on key topics very unlikely). The UK’s resolution was withdrawn after numerous bilateral meetings, informals and side meetings, and the newly agreed modalities will come into effect with the third meeting of the group in July.

The agreement provides for more transparency around the rejection of non-ECOSOC accredited NGOs when they apply to participate in sessions, and clarifies that accredited stakeholders will be able to attend formal OEWG meetings, make oral statements during dedicated sessions, and submit written inputs to the OEWG webpage. Nonetheless, some countries, including Russia, have already indicated that they will continue to use the veto as they see fit, potentially resulting in a repeat of the blanket vetoes of non-ECOSOC accredited NGOs. It will therefore continue to be vital for member states and stakeholders not only to call out this behaviour if and when it happens, but also to utilise informal methods of engagement, including national and regional consultations, to feed in their views. Our guide, published at the outset of the first OEWG, offers some examples and tips.

The continued possibility of blanket vetoes also means that the proposed Cyber Programme of Action (PoA), a permanent mechanism at the UN First Committee to discuss state behaviour in cyberspace, may offer a more promising avenue for stakeholder engagement in these discussions at the UN. After slow progress, stymied to some extent by the inability to meet in person but also by disagreements over whether and how the mechanism differs from the OEWG, some of the Programme’s co-sponsors will host a multi-stakeholder workshop on the PoA in Geneva in May, with the aim of garnering stakeholder input on the proposal and generating further support for it. The workshop is open to all; you can register here.

In other news
  • GPD has launched two new tools to support the development of human rights-respecting cybercrime legislation and national cybersecurity strategies. Building on our longstanding work in this area, the tools offer a comprehensive framework for assessing the different elements of cybercrime legislation and national cybersecurity strategies from a human rights perspective, as well as examples of good practice and other considerations.
  • In June, GPD will co-host a session at RightsCon with Freedom House entitled “Putting the internet back together again”, focusing on the human rights risks of internet fragmentation and the principles needed for an open, interconnected and interoperable internet. We’re also working with ICANN to deliver a RightsCon session entitled “Throwing the internet into reverse?”, and with the Global Encryption Coalition Steering Committee and the Internet Governance Forum to organise two community labs focused on encryption advocacy and building community amongst cybernorms practitioners, as well as a social hour called “Let’s save encryption!” To view the full programme of events and to get your ticket, visit the RightsCon 2022 website here. Registered participants will be able to log in to the platform, view the full agenda, and build their own personal schedule from 23 May.

  • Executive Director Lea Kaspar moderated an expert panel on Digital Authoritarianism and technology-facilitated threats to journalists and human rights defenders at UNESCO’s World Press Freedom Conference in Uruguay this week. The session was organised by the Dutch and Canadian governments, in collaboration with the Media Freedom Coalition and the Freedom Online Coalition.

Listening Post

In this special LEXOTA edition of the Listening Post, we focus on the most recent legislation and proposals imposing restrictions on online disinformation in Sub-Saharan Africa. Many of these are vaguely worded, and impose harsh and disproportionate penalties on individuals, posing considerable threats to human rights. Explore the full analysis of each law in our new tool, LEXOTA:

  • The Central African Republic’s Law N° 20.027 on Freedom of Communication criminalises the sharing of false news which “disturbs the public peace” or “is likely to shake the discipline or the morale of the armies or to hinder the war effort of the nation”.  

  • In eSwatini, the proposed Computer Crime and Cybercrime Bill would impose a penalty of up to ten years’ imprisonment for publishing, through any medium including social media, any statement or fake news “with the intention to deceive”.

  • Lesotho’s 2020 Internet Broadcasting Rules update the 2004 Broadcasting Rules for media organisations, extending them to online publications and requiring information to be presented without distortion, exaggeration or misinterpretation. The Lesotho Communications Authority, whose members are appointed by the government, is charged with making determinations on compliance with these rules.

  • Mauritania is one of the few countries in the region to have passed legislation specifically targeting online disinformation. Law No. 2020-015 on the fight against the manipulation of information imposes disproportionate penalties of between three months’ and five years’ imprisonment for sharing false news; these penalties may be applied even where individuals were unaware that the information was false and had no intent to harm.

  • In Mozambique, the proposed Social Communication Law, which is to be discussed in Parliament later in 2022, would prohibit the publication of “false news or unfounded rumours” where doing so jeopardises the public interest or law and order.

  • The draft Bill on the Framework of the Use of Social Networks in Senegal appears to penalise the disclosure or publication of “misleading” information with imprisonment of one to three years and a fine of up to 1 million francs CFA (USD 1,718). The high minimum penalty raises particular concerns about the law’s impact on freedom of expression in the country.

  • Sierra Leone’s recently passed Cyber Security and Crime Act provides that any person or corporation found to have sent, whether recklessly or intentionally, a false message by a computer system or network with malicious intent is liable to a fine of up to USD 4,000 and/or imprisonment of two to five years.

  • Togo’s Law related to the Press Code and Communication, passed in 2020, prohibits the dissemination or publication of information “contrary to reality” for the purpose of manipulating consciences or distorting information or facts, punishable with either a fine or the suspension of a broadcasting or publishing licence.

  • Zimbabwe’s new Data Protection Act makes it a crime to share false information via a computer with the intention to cause harm, punishable with fines or prison sentences of up to five years.
Copyright © 2022 Global Partners Digital.
All rights reserved