the digest, August 2021

Some thoughts on Apple’s (postponed) child safety proposals

In recent weeks, we’ve seen a number of major online platforms—including Instagram, YouTube and TikTok—take measures to enhance the safety and security of children. 

But the most controversial proposed change has come from Apple. In August, it disclosed a planned suite of new iOS features, one of which would have enabled the company to scan the iCloud accounts of US users for known images of child sexual abuse material and report detections to the US National Center for Missing and Exploited Children (NCMEC). A second would have warned children and their parents when sexually explicit photos were received by, or sent from, the device.
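To make the first mechanism concrete: “scanning for known images” generally means matching image hashes against a database of hashes of previously identified material, rather than inspecting photos by eye. The Python sketch below is a deliberately simplified illustration of that idea. Apple’s actual design used a perceptual hash (NeuralHash) together with cryptographic techniques such as private set intersection; the hash function, placeholder database and threshold here are illustrative assumptions, not Apple’s implementation.

```python
# Simplified sketch of known-image hash matching, assuming a hypothetical
# hash database and threshold. Apple's real design used a perceptual hash
# (NeuralHash) plus private set intersection; a cryptographic hash is used
# here only to keep the illustration self-contained.

import hashlib

# Hypothetical digests of known abuse imagery (in practice supplied by NCMEC).
KNOWN_IMAGE_HASHES: set[str] = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

# Accounts are flagged only after matches pass a threshold, limiting the
# impact of any single false positive. The value below is an assumption.
MATCH_THRESHOLD = 30


def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash.

    A real system needs a perceptual hash so that resized or re-encoded
    copies of the same image still match; SHA-256 does not have that
    property and is used here purely for illustration.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(images: list[bytes]) -> int:
    """Count how many of the images match the known-image database."""
    return sum(1 for img in images if image_hash(img) in KNOWN_IMAGE_HASHES)


def should_flag_for_review(images: list[bytes]) -> bool:
    """Refer the account for human review only above the match threshold."""
    return count_matches(images) >= MATCH_THRESHOLD
```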

Apple’s changes were supported by, among others, NCMEC and, in the UK, the Internet Watch Foundation (IWF) and the National Society for the Prevention of Cruelty to Children, as measures that would help tackle the sexual abuse of children. Susie Hargreaves, CEO of the IWF, welcomed the proposals as a “vital step to make sure children are kept safe from predators”, noting that “while customers’ privacy must be respected and protected, it cannot be at the expense of children’s safety”.

Many human rights and privacy-focused organisations, however, were critical. US-based organisations, including the Electronic Frontier Foundation, the Center for Democracy and Technology and New America’s Open Technology Institute, raised particular concerns over the immediate risks to LGBTQ young people and to children in abusive homes, whose parents would be warned if they shared or received potentially sexual content. They also raised concerns over risks to individuals across the world, should governments in other countries demand that Apple scan devices for other forms of encrypted content. Indeed, while the proposals focused on US users, a coalition of over 90 organisations from across the world (including GPD) wrote to Apple. The joint letter highlighted concerns that the proposals would amount to a weakening of encryption, creating new risks to human rights, and asked Apple to reconsider its plans and engage more with vulnerable communities and other stakeholders.

As a result of the backlash, Apple announced on 3 September that it would delay the release of the new features to allow time to “collect input and make improvements”.

This episode illustrates the difficulty of addressing illegal and harmful forms of content online without creating risks to privacy and security that many find unacceptable. There are options for tackling the abuse of children online without undermining end-to-end encryption (see, for example, the Center for Democracy and Technology’s recent report on best practices for moderating content on end-to-end encrypted platforms, such as user reporting mechanisms, metadata analysis and, potentially, machine-learning classifiers). Yet such measures all have limitations, and there are valid concerns around the continued prevalence of such material on end-to-end encrypted platforms, and the challenges that end-to-end encryption creates for detecting and investigating abuse.
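As a toy illustration of one of those techniques, the sketch below shows what metadata analysis might look like in practice: surfacing risky sending patterns from message metadata alone, while message bodies remain end-to-end encrypted. The record fields, heuristic and threshold are hypothetical assumptions chosen to show the shape of the approach, not any platform’s actual policy.

```python
# Toy sketch of metadata analysis on an end-to-end encrypted platform,
# assuming hypothetical metadata fields and thresholds. Only metadata is
# inspected; message content is never decrypted.

from dataclasses import dataclass


@dataclass
class MessageMetadata:
    sender_id: str
    recipient_id: str
    recipient_is_new_contact: bool  # first ever message between this pair
    attachment_count: int


def flag_suspicious_senders(log: list[MessageMetadata],
                            min_new_contacts: int = 50) -> set[str]:
    """Flag senders who send attachments to many brand-new contacts.

    The heuristic and threshold are illustrative, not a production policy;
    flagged accounts would typically go to human review, not automatic action.
    """
    new_contact_sends: dict[str, set[str]] = {}
    for m in log:
        if m.recipient_is_new_contact and m.attachment_count > 0:
            new_contact_sends.setdefault(m.sender_id, set()).add(m.recipient_id)
    return {sender for sender, recipients in new_contact_sends.items()
            if len(recipients) >= min_new_contacts}
```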

In the absence of a solution that satisfies all parties, one area where there can be consensus is on process. In a piece for World Politics Review, Emily Taylor carefully examined the announcement and proposed rollout of Apple’s new features, noting that “it is difficult to imagine any democratic country implementing such controversial measures without thorough public consultation”. Apple’s own Human Rights Policy, adopted less than a year ago, commits the company to the UN Guiding Principles on Business and Human Rights, which require companies to conduct human rights due diligence and impact assessments, including through consultation with “potentially affected groups and other relevant stakeholders”.

Little is known about the engagement and consultation that Apple undertook in developing its proposals. However, the fact that so many organisations that have worked on human rights and technology for many years were caught by surprise suggests that there was scope for a more open and inclusive approach. Apple’s decision to delay the rollout of its new features provides an opportunity to engage more broadly, including with stakeholders and groups outside of the US who have also raised concerns. This engagement should include all of those whose human rights would be affected, or potentially affected, by the tools: both those who have concerns over risks to privacy, and children whose rights are currently being violated through sexual abuse.

UN cybercrime treaty discussions: an important development

A new Ad Hoc Committee (AHC), established under the UN General Assembly’s Third Committee, will soon begin negotiations on a treaty on cybercrime. The end of July saw an important development on this front, with Russia putting forward its proposal for a draft Convention.

Its lengthy and detailed proposal suggests including 23 cybercrimes, ranging from malware research to content-related offences concerning “terrorism” and “extremism” and the “creation and use of digital data to mislead users”. This is very concerning from a human rights perspective. By vastly increasing the number of cybercrimes covered in the only existing multilateral cybercrime convention (the Budapest Convention, which lists nine), and leaving states parties very wide scope to define the means of investigating and prosecuting these crimes, the new text would open the door to almost unfettered law enforcement access to data.

Russia has been spearheading the initiative to develop a new Convention, so it is unsurprising that it has so far been the most proactive and detailed in its input. However, this is only the beginning of the treaty process: other UN member states are free to put forward their own counterproposals. They’ve been asked by the AHC Chair to give their views on the scope and aims of the Convention by 29 October, a great opportunity for states engaging in the AHC to reach out to civil society and other non-state actors, and ensure that their crucial input is meaningfully included in the process. We know from the modalities adopted in May that NGOs with ECOSOC status will be able to register to take part in the negotiations, beginning with the first meeting taking place from 17 to 28 January 2022. Non-ECOSOC NGOs may also be able to participate, although this will depend on no state objecting to their participation.

NGO engagement will be critical to ensuring that the treaty does not become a means to encode dangerous new standards at the international level that could negatively impact human rights. As has been noted recently, there is already much that states can do to address cybercrime in a rights-respecting way—from promoting strong security standards to investing in rights-respecting capacity building for law enforcement.

In other news
  • Over at the First Committee, July saw the release of the final report of the UN Group of Governmental Experts (GGE), along with a compendium of state views on the application of international law in cyberspace (an advance version was released in May without this list). This should serve as an important resource both for fostering a collective understanding of how international law applies in cyberspace and for holding states to account for their actions.
  • As part of the Global Encryption Coalition, we signed a joint letter calling on Apple to abandon its (now postponed) plans to build surveillance capabilities into iPhones, iPads and other Apple products, discussed in more detail above.

Human rights or ethics? The key faultline in AI discussions

As discussions around the governance of artificial intelligence (AI) progress in a range of forums and bodies—from UNESCO to the European Commission—one key question remains contentious: what is the appropriate framework for identifying and addressing AI-related impacts? 

Many argue strongly that the international human rights framework—with its universal application and a decades-long history of interpretation and implementation—is the most appropriate. Others posit that alternative frameworks, often termed “ethical” or “responsible”, should be developed to respond to the particular nature and impacts of AI.

The relationship between ethics and human rights in the field of AI governance is complex, and understanding the motivations behind the different perspectives is critical. Some proponents of ethics suggest that human rights alone are insufficient to address the risks that AI creates, and seek to build upon international human rights standards. Issues such as the transparency of AI-based and algorithmic decision-making, or environmental impacts, are cited as examples of societal impacts that the human rights framework does not directly address. Others, particularly in the corporate sector, may simply be less aware of the human rights framework and the strengths it offers as a means of governing AI. Worse still, for some, ethics provides a means to exclude human rights from the discussion entirely, enabling them to shirk their human rights obligations or discredit the legitimacy of the human rights movement.

In the last year, we’ve seen the tension between ethics and human rights arise in both multilateral spaces and national policymaking processes. Fortunately, many governments increasingly assert the need for any governance frameworks to be grounded in human rights. The language of the Freedom Online Coalition’s Joint Statement on Artificial Intelligence and Human Rights is rooted in the requirements of international human rights law. And, during negotiations on UNESCO’s Recommendation on the Ethics of Artificial Intelligence (see our analysis here), many states (particularly Canada, Germany and Austria) were active in ensuring the final text clarified that AI-related ethics must build upon and supplement human rights, rather than substitute for them. Similarly, the recently adopted Resolution on new and emerging technologies at the Human Rights Council explicitly highlighted “the importance of a human rights-based approach to new and emerging digital technologies”, language which ultimately led China (joined by Eritrea and Venezuela) to abstain from the Resolution. Recent commitments to promote a human rights-based approach in Ireland’s National AI Strategy and Australia’s International Cyber and Critical Technology Engagement Strategy are also promising in this context.

There are, however, many other global forums and processes, such as the ITU, where this is not the case. Many multistakeholder or corporate initiatives are also unlikely to have human rights expertise, and most national AI strategies either prioritise ethics or responsible AI over human rights, or fail to address the risks associated with AI at all. This reinforces the need for those committed to human rights—whether in government, the private sector, civil society or elsewhere—to follow and engage in these processes, and to make the case for putting human rights at the heart of new governance frameworks. 

For more resources on AI and human rights, including a guide to forums and an interactive calendar, see our AI Policy Hub.

Listening post

Your monthly global update, tracking relevant laws and policies relating to the digital environment.

On online content:
  • Canada’s government has unveiled a proposed Bill to tackle “online harms”, due to be introduced in autumn 2021. It focuses on five types of content which are already illegal, and won’t apply to private or encrypted channels.
  • Turkey has introduced new legislation requiring social media providers to remove, within 24 hours, a range of content that threatens security, disrupts public order, praises crime and criminals, or attacks the personalities of others.
  • Kyrgyzstan’s parliament has passed the Law on Protection from False and Inaccurate Information, which enables the government to remove or block content that contains “false or inaccurate information” without a court order. 
On the trust and security side, a few updates on laws relating to cybersecurity and cybercrime: 
  • Zimbabwe’s Cybersecurity and Data Protection Bill has been recommitted to the Senate.  
  • Cuba has published a series of decrees on telecommunications, including the internet and radio, as well as a resolution on cybersecurity.
  • Barbados is working on cybercrime legislation in line with the Budapest Convention. 
  • The government of Papua New Guinea is also holding an online consultation on its draft National Cyber Security Policy. 
Copyright © 2021 Global Partners Digital.
All rights reserved.
