the digest  April 2021

AI and human rights: how do the new frameworks measure up? 

As we noted in last month’s Digest, numerous efforts are taking place around the world to shape the way that artificial intelligence (AI) is developed, used and governed. 

April proved a busy month in this regard, with major developments at the Council of Europe, UNESCO and the European Union. While the outputs of these three processes will differ in nature and form, all are ultimately trying to shape national-level regulation and governance of AI technologies:

  • The Council of Europe is doing so via a “legal instrument” which could be endorsed or ratified by states (including states outside the Council of Europe), and which would provide a set of minimum standards on AI governance, similar to its instruments on data protection and cybercrime. The instrument is still being drafted, and the consultation period has been extended (but hurry: it closes on 9 May). You can see our consultation response here, and our friends at ECNL have developed a great guide on how to answer the consultation survey.

  • The European Union is developing legislation which would be binding on all EU member states, and potentially a model for others. A draft was published this month which would prohibit certain applications of AI, set out detailed regulatory requirements for “high-risk” AI systems, and impose transparency obligations for certain other uses.

  • UNESCO has published the latest version of its draft Recommendation, a non-binding but highly detailed instrument setting out recommended policy actions UN member states should take. We just published an initial assessment of the draft Recommendation—read it here.

How do these frameworks shape up from a human rights perspective? One welcome aspect of all three is the recognition of human rights as a fundamental consideration. All explicitly recognise the risks to human rights that stem from AI, and all propose measures to mitigate them. While the level of detail varies, all include (or are likely to include) requirements for greater transparency around AI systems, restrictions on AI systems which carry the greatest degree of risk to humans, and greater responsibility and accountability for companies developing AI technologies.

However, concerns remain:

  • The EU’s Regulation contains only limited prohibitions on the most harmful AI systems, subject to a number of exceptions, and weak transparency requirements (as noted by many organisations, including Access Now, AlgorithmWatch, ARTICLE 19 and EDRi). Taken together, this means a significant proportion of AI systems would be largely unaffected.

  • The Council of Europe process has yet to produce a draft text, but the recent multistakeholder consultation (to which we submitted) gave respondents only limited scope to recommend what the legal instrument should contain. Most of the questions simply asked respondents to “agree” or “disagree” with broad statements, or to prioritise certain issues or uses of AI that raise concerns.

  • Finally, UNESCO’s draft Recommendation takes an “ethical” approach, leaving the relationship between “ethics” and “human rights” unclear in the text; it also includes vague and undefined language, and takes a limited approach to the need for human oversight of AI. For further analysis, see our blog post on the draft Recommendation.

Regardless of the merits of these processes, a further concern is the risk of fragmentation in human rights protections. While the existence of multiple instruments and frameworks on the same issue is not a concern per se, widely diverging standards could lead to a “race to the bottom” rather than a universally high level of human rights protection.

To keep track of developments, we’ve upgraded our AI Policy Hub to include an interactive calendar of key events and meetings at these (and other) forums, alongside a comprehensive set of resources and tools to help navigate and understand the issues involved.

Intermediaries face pressure over encryption in India, the UK and Mauritius

Over the last few months, India’s controversial new Intermediary Guidelines have faced increasing challenge from a range of sources.

Back in February, GPD, among many others, raised concerns about the potential human rights risks posed by the (then leaked) Guidelines—which require intermediaries to have access to all traffic on their platforms, including encrypted communications. Since then, three court cases have been brought against the Guidelines: two at the Kerala High Court (including one led by a Global Encryption Coalition member), and another in Delhi.

In likely response to the controversy, the Ministry of Electronics and IT (MEITY)—which made the Guidelines, and is now developing Standard Operating Procedures to implement them—indicated that it is open to receiving questions from civil society. In April, it opened a consultation (with a notably short timeframe) to which civil society groups, including MediaNama, submitted questions, aiming to elicit specific and concrete responses from the government regarding the impact of the rules on encryption. It’s not yet clear when MEITY will respond, or how much it will be willing to say.

India’s government isn’t alone in seeking restrictions on encryption. This month, the Mauritian government opened a consultation around new government proposals to regulate social media companies’ deployment of encryption, which may impose the same traceability requirements we’re seeing in India. 

And in the UK, the long-awaited Online Harms Bill (likely coming in June) is also expected to have damaging ramifications for intermediaries deploying end-to-end encryption. In a recent speech at the annual conference of the National Society for the Prevention of Cruelty to Children (NSPCC), Home Secretary Priti Patel described Facebook’s plans to roll out end-to-end encryption on its Messenger platform as “unacceptable”. The NSPCC itself has previously described private messaging as “the frontline of child sexual abuse online”. As Open Rights Group noted in its response to the conference: “Calls to remove end-to-end encryption, as a means to protect children, disrespect children’s right to privacy as much as that of adults.”

Other news

  • Encryption needs to be defended at the regional level, too. This month, we signed onto a joint statement on the EU’s proposed measures to fight child sexual abuse online, alongside almost 30 fellow members of the Global Encryption Coalition (GEC).

Tackling extremist content: three processes to watch

As we approach the two-year anniversary of the Christchurch Call, measures to tackle terrorist and extremist content continue to be a priority for states—and to pose risks to freedom of expression online. A few recent developments on this front to be aware of:

  • The EU has approved its Regulation on addressing the dissemination of terrorist content online. Dozens of organisations sent a joint letter to the European Parliament warning that it will incentivise platforms to use automated content moderation tools, such as upload filters, which are prone to error and could restrict legitimate speech.

  • New Zealand is currently scrutinising a Bill to tackle illegal and harmful content online, including terrorist and violent extremist content. GPD provided evidence to the New Zealand government and signed a joint statement organised by InternetNZ, both of which raise concerns about the Bill’s proposed use of mandated filtering without sufficient safeguards.

  • Israel’s Supreme Court has rejected a petition challenging the legality of the country’s self-regulatory takedown regime. The court held that the petitioners did not establish a connection between the actions of the government’s Cyber Unit—which issues takedown requests for illegal content (such as terrorist content) to online platforms—and the infringement of any specific constitutional rights. The court’s authorisation of these extra-legal takedowns has prompted criticism from commentators, who argue it enables the government to suppress speech without due process or transparency.

Other news:
Some important developments from Facebook's Oversight Board:

  • The Board has selected a case concerning the removal of an Instagram post featuring a founding member of the Kurdistan Workers’ Party (PKK), which is designated as a terrorist organisation by the US and the EU. Its decision may have widespread implications for Facebook’s broader content moderation policies.
  • The Board also just upheld former president Donald Trump’s suspension from Facebook and Instagram, imposed on 7 January, but criticised the open-ended nature of the ban as “indeterminate and standardless”. It has asked Facebook to review the decision and determine a proportionate response. We'll be following this closely.

Cyber at the UN: an update

Compared to March—which saw the Open-ended Working Group on ICTs (OEWG) adopt its final report, and the penultimate meeting of the Group of Governmental Experts (GGE)—April was relatively quiet for UN discussions on cyber.

May and June are, however, expected to be significantly busier, with several important meetings scheduled:

  • The inaugural meeting of the new Third Committee group set up to negotiate a cybercrime treaty (10-12 May). You can watch a livestream of the meeting here.

  • The final informal consultations of the GGE for non-member states (20-21 May); and the GGE’s final, closed meeting (24-29 May)—hopefully ending with a consensus report that complements the OEWG’s outcome report;

  • The first organisational meeting of the new OEWG (1-2 June). This is particularly significant, as it will decide the modalities for the OEWG, including NGO participation. Alongside other groups, we’ll be engaging closely to try to shape those modalities—stay posted.

Other news:

  • Our Senior Programme Lead Sheetal Kumar has published an article in the Journal of Cyber Policy, looking at the role of civil society in implementing cyber norms.

Listening post

Your monthly global update, tracking laws and policies relating to the digital environment.

On the online content side, we saw several important new developments:

  • Ireland introduced the General Scheme of the Criminal Justice (Hate Crime) Bill 2021, which updates (and toughens) the 1989 Prohibition of Incitement to Hatred Act and extends it to cover online hate crimes.

  • Canada is set to introduce an online safety bill in the coming weeks, which would establish a new online content regulator and include rules about content filters. The bill will focus on five types of harmful content including terrorism, violence, and hate speech, but notably does not cover disinformation. 

  • Mauritius’ proposed amendments to the ICT Act would allow the ICT authority to monitor and censor content on social media platforms by setting up a National Digital Ethics Committee. Attempts to exert government control over the internet are nothing new in Mauritius, and the consultation paper on these proposed amendments frankly acknowledges that they will "interfere with the Mauritian people’s fundamental rights and liberties, in particular their rights to privacy and confidentiality and freedom of expression.”

On the trust and security side, cybersecurity and cybercrime legislation progressed in several countries: 

  • Liberia confirmed that the Draft Cyber Security Act of 2021 is due to be presented to the Executive Mansion, after which it will be passed on to legislators for enactment. 

  • In Germany, the Bundestag approved the IT Security Law 2.0. 

  • Civil society organisations have raised concerns about Zambia’s newly passed Cyber Security and Cyber Crimes Act 2021, citing its potential to suppress freedom of expression and the right to privacy. 

Elsewhere, Vanuatu approved its National Cybersecurity Strategy (NCSS); and the UK has confirmed plans to publish its new NCSS later this year. Australia also launched its International Cyber and Critical Tech Engagement Strategy, which features strong commitments to human rights. 

On the emerging tech front, Belarus became the latest country to adopt data protection legislation through its Law on the Protection of Personal Data, which comes into force in six months. And Brazil has adopted a National Artificial Intelligence Strategy, becoming the third state in South America to do so, after Colombia in 2019 and Uruguay in 2020.

Copyright © 2019 Global Partners Digital.
All rights reserved

Global Partners Digital · Second Home · 68 Hanbury St · London, E1 5JL · United Kingdom