The European Commission's Margrethe Vestager oversees digital policy for the EU. (Reuters)
Europe tackles AI-manipulated content
The European Commission unveiled on Wednesday a strikingly wide-ranging proposal for regulating artificial intelligence. The draft rules were hailed as the first of their kind, cementing the bloc’s aspiration to be the global rule-maker for technology, following GDPR in 2016.
In its sights were the predictable candidates: self-driving cars, job recruitment algorithms and facial recognition. But less predictable was how the EU would approach AI-manipulated content.
Its sole intervention on that topic is a proposed obligation to “disclose that the content is generated through automated means.” The proposal mentions exemptions for “legitimate purposes,” such as law enforcement and freedom of expression. It doesn’t specify what form disclosure should take, though possibilities include a textual, audio or visual warning.
This is a major step toward what has become known as “authenticity infrastructure.” Beyond the proposal’s ambiguities (what qualifies as exempt on the grounds of freedom of expression, who is responsible for disclosure, and what degree of AI manipulation makes disclosure necessary), there are some important points to address.
First, the EU’s desired outcomes are “informed choices or to step back from a given situation.” This shows an understanding that prompting people to pause can reduce the sharing of harmful content. But early research casts doubt on whether warnings on AI-generated content do much at all.
Second, these desired outcomes indicate a concern with “deepfakes,” which were mentioned four times in the proposal, based on their potential to mislead. But the abuse of women via deepfaked porn, which wasn’t mentioned, is the clearer and more present danger.
There are also risks and unintended consequences with labeling AI-manipulated content. What happens when labels are erroneously applied? What happens when they are faked? And will they make unlabeled media seem more trustworthy?
In the coming weeks, First Draft will be publishing research on AI content labeling and its risks, in collaboration with the Partnership on AI nonprofit coalition. — Tommy Shane