Digital Deception Decoder 
July 26, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!
 
*****
We depend on your support to publish this newsletter. Since we launched a year and a half ago, we've covered everything you need to know about digital deception in our political system. Please donate today by contacting Hamsini Sridharan (hamsini@maplight.org) or visiting the MapLight website.
*****

  • Announcement: Starting August 5, we’re moving to Mondays! 
  • Warnings: In Vox, Eric Johnson recaps the latest episode of Recode Decode with Kara Swisher, in which Rep. Adam Schiff (D-Calif.) discusses the likelihood of digital deception in the 2020 election. Schiff, chairman of the House Intelligence Committee, warns that both the US government and technology companies are unprepared to combat disinformation. And for USA Today, former CIA analyst Cindy L. Otis identifies six disinformation trends to watch for in the election. 
  • Facebook’s FTC settlement over its privacy violations was formally announced this week; Commissioner Rebecca Kelly Slaughter’s dissent on the decision discusses the ways this outcome was largely a gift to the company. Just as the settlement was finalized, it was confirmed that the company is the subject of a formal antitrust investigation by the FTC, detail Mike Isaac and Natasha Singer for The New York Times. This comes as the Justice Department announced an antitrust review of big tech companies earlier this week. In the MIT Technology Review, Martin Giles lays out the implications of these investigations. 
  • Research and funding: This week, the John S. and James L. Knight Foundation announced a $50 million investment in research on technology’s role in democracy. Following an open request for proposals, funding will be distributed among 11 universities, five of which will host new centers of study. Meanwhile, Facebook’s ex-security chief Alex Stamos is working to build the “Stanford Internet Observatory,” a data clearinghouse for studying internet abuse across platforms, reports Andy Greenberg in Wired. 
  • Transparency? The New York Times’ Matthew Rosenberg reveals that political ad data from Facebook’s Ad Library has been rendered useless by bugs and other technical issues, according to researchers from Mozilla and the French government. The Mozilla team reported the library’s flaws but was met with inaction on Facebook’s part. (Want to learn more about the platforms’ political ad centers and their weaknesses? Check out MapLight’s guide.)
  • Libra: Following Facebook’s Libra announcement, a dozen fake Facebook and Instagram accounts were found to have misrepresented themselves as official pages for the currency, report Drew Harwell, Tony Romm, and Cat Zakrzewski for the Washington Post. In response to this development, digital currency expert Svitlana Volkova of the Pacific Northwest National Laboratory says, “Of course cryptocurrency is a very fruitful topic for adversaries to spread disinformation. It’s very dynamically changing like our political environment.” 
  • On Tuesday, the New York Times unveiled the “News Provenance Project,” an initiative aimed at curbing disinformation online by making journalism more transparent to the public, writes project lead Sasha Koren. In collaboration with IBM, the Times is experimenting with publishing photos on a blockchain so that users can see each image’s source. In the MIT Technology Review, Mike Orcutt provides a brief overview of how the project came to be. 
  • Since California’s “Bot Law” went into effect earlier this month, questions have been raised about its effectiveness at curbing disinformation. In Wired, Renee DiResta argues, “The law's original intent—to shine a spotlight on malign bot activity and to increase public awareness—is noble. But in effect, the law has three major flaws: ambiguity, limited platform responsibility, and misguided enforcement.” 
  • On Wednesday, Netflix released a documentary called “The Great Hack” exploring Facebook’s data practices and the Cambridge Analytica scandal. The film serves as a “timely reminder” of disinformation threats ahead of 2020, writes Zak Doffman in Forbes. But in a Twitter thread, communications scholar David Karpf points out that some of the doc’s claims should be taken with a grain of salt: Cambridge Analytica’s claims about the effectiveness of its targeting were likely overstated to market their services. That said, as Karpf concludes, “It seems like usually the debate boils down to ‘Cambridge Analytica is bad and the data markets should be regulated’ versus ‘CA is ineffective and everything is fine.’ There needs to be room for a 3rd option. CA was ineffective. Data markets need to be regulated.”
  • Note: This week’s Decoder brought to you by MapLight intern Abby Luke.

We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan

Brought to you by:
 
MapLight     DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.

