Digital Deception Decoder 
June 22, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Last call to register for our Bay Area event with MapLight’s Ann Ravel and the DigIntel Lab’s Katie Joseff on Thursday, June 27th. Please join us if you can, and forward to anyone who might be interested.
  • After months of speculation, this week Facebook announced its new cryptocurrency, Libra, which is being marketed as, among other things, a service for the underbanked. In WIRED, Steven Levy and Gregory Barber describe the plan for Libra, including the blueprints for a “neutral association”—composed of up to 100 members, each investing $10 million—to govern it. Language mimicking democratic governance is swirling around the Libra Association, but as Matt Stoller points out in The New York Times, Libra appears to be attempting to bypass the protections of actual government, which could lead to a range of problems. Implications for digital deception TBD.
  • Pivot to privacy? Sam Biddle at The Intercept discusses an interesting legal strategy that Facebook is deploying in a California district court case over the Cambridge Analytica scandal: arguing that users have no expectation of privacy on the social media site. Meanwhile, technologist Maciej Ceglowski makes the case for a broader definition of privacy—“ambient privacy”—that requires government protection. Ceglowski asks, “To what extent is living in a surveillance-saturated world compatible with pluralism and democracy?”
  • AI policy expert Mutale Nkonde lays out the importance of regulating deepfakes, highlighting the ways that digital technologies become weapons of misogyny and racism. Nkonde helped develop the Deepfakes Accountability Act introduced last week; at TechCrunch, Devin Coldewey examines the provisions of the bill in greater depth, arguing that it establishes an important legal standard for charging bad actors. On a related note, law scholar Danielle Keats Citron’s testimony at last week’s Congressional hearing offers important insight into the harms to individuals, democratic systems, and national security that can be caused by deepfakes, as well as a legal framework for addressing those harms without infringing on First Amendment protections.
  • Researchers at UC Berkeley teamed up with Adobe to develop deep learning software that detects deepfake photos using detailed facial analysis (full paper here). While this software achieves 99% accuracy in identifying fakes generated by Photoshop’s Face-Aware Liquify feature, the researchers warn that the threat will continue to intensify as deepfake techniques advance. In a CBS News report, UC Berkeley professor Hany Farid describes developments in deepfake detection—including his new software program for comparing real videos to potentially altered ones—as a “drop in the bucket,” needed alongside improved reporting, media literacy, and better company and government policy.
  • A recent European Commission report reveals that Russian disinformation efforts targeted last month’s EU elections, reports Colin Lecher for The Verge. Despite this continued interference, in a recent Atlantic Council report, Alina Polyakova (Brookings Institution) and Ambassador Daniel Fried (Atlantic Council) detail how the EU has taken more measures to combat disinformation than the US, offering recommendations for both governments—as well as technology companies and civil society—to continue building resilience against information warfare.
  • Digital ads: This week, the FEC resumed discussion of rules for campaign finance disclaimers on online political advertising, reports Sara Swann for The Fulcrum. Meanwhile, in TechCrunch, Natasha Lomas digs into a new report by the UK Information Commissioner’s Office, which found that behavioral advertising is based on illegal profiling of users. And DuckDuckGo CEO Gabriel Weinberg has scathing words for Big AdTech’s reliance on behavioral advertising in The New York Times, arguing in favor of strong privacy laws such as California’s.
  • Related: Political communication scholars Daniel Kreiss and Shannon C. McGregor have a new study out examining how Facebook and Google make decisions about what paid political advertising to allow on their sites, and how those often-opaque decisions affect political practitioners and the public.
  • On Tuesday, the World Federation of Advertisers announced the creation of the Global Alliance for Responsible Media to curb disinformation and encourage online safety, with members including Facebook, Google, and Twitter as well as General Mills, Mastercard, and others. The group is intended to develop cross-brand and cross-platform guidelines for tackling online content issues. Axios’ Sara Fischer offers context for this coalition in light of previous self-regulatory efforts by the industry.

We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan

Brought to you by MapLight and the DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.

