Digital Deception Decoder 
December 21, 2019

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Note: This is the last Decoder for 2019 — a little longer and a little later than usual because there’s a lot of year-end news to chew on. We’ll see you in the new year! 
  • Our take: In Blavity, MapLight’s Ann Ravel talks about the necessity of addressing online voter suppression to protect communities of color. And for CNN, Ravel points out that the need for government regulation is getting lost in the debates over Facebook, Twitter, and Google’s various political ad policy changes.
  • Fakebook: Tony Romm reports for the Washington Post that Facebook has announced new rules barring ads that spread disinformation about the U.S. Census, including ads by politicians. As Campaign Legal Center’s Adav Noti sharply summed up, “Facebook extends its uniform policy of being unwilling to monitor political content except when it does and unable to detect lies except when it can.” Per TechCrunch’s Josh Constine, Facebook’s policy of not fact-checking politicians will also apply on Instagram. And BuzzFeed’s Ryan Broderick reports that California gubernatorial candidate Adriel Hampton continues to test the policy, running a Facebook ad with edited video purporting to show Sen. Mitch McConnell supporting impeachment of the president. Meanwhile, a Wall Street Journal exclusive from Emily Glazer, Deepa Seetharaman, and Jeff Horwitz suggests that Facebook board member and libertarian entrepreneur Peter Thiel is pushing the company to stick to its controversial policy.
  • In potential foreshadowing of the U.S. election, Forbes contributor Simon Chandler recaps research from First Draft News, which found that 88% of political ads posted by the UK’s Conservative Party in early December were misleading. In The New York Times, Adam Satariano and Amie Tsang discuss the role played by domestic and foreign disinformation in the UK election.
  • A new study from researchers at Northeastern, USC, and the nonprofit Upturn investigates Facebook’s ad delivery algorithm for political advertising, finding that the algorithm prices ads based on how closely their content aligns with audiences’ inferred political preferences, thereby reinforcing political polarization. And ProPublica’s Ava Kofman and Ariana Tobin describe related research suggesting that, despite its civil rights settlement, Facebook’s new system for housing and employment ads can still discriminate against specific groups in a variety of ways.
  • In Snopes, Jordan Liles and Alex Kasprak report that, following months of inquiries from the outlet, Facebook has finally removed a pro-Trump media network called TheBL, connected to the now-infamous Epoch Times, that engaged in coordinated inauthentic behavior. On Lead Stories, Sarah Thompson breaks down how the network used AI-generated synthetic faces to populate its fake accounts.
  • Deepfake effects: For Ars Technica, Timothy B. Lee discusses his attempt to create a deepfake video replacing Mark Zuckerberg with Star Trek character Data, offering a technical explanation of how the underlying AI autoencoders are trained that reveals how costly and limited they currently are. (Lee does, however, underestimate the present and potential harms of such deepfakes for the harassment of women and the undermining of political trust.) For the technically curious, a minimal code sketch of the autoencoder setup follows this list. In Technology Science, Max Weiss describes an experiment he conducted testing whether “deepfake” (AI-generated) text can be used to game public comment systems (answer: yes). And in Politico, Janosch Delcker explores how deepfakes foment uncertainty about reality, discussing effects such as the “liar’s dividend” described by legal scholar Danielle Keats Citron.
  • Deepfake responses: Technologist Aviv Ovadya argues in the MIT Technology Review that it is necessary to prevent the harms that deepfakes portend while encouraging the development of “ethical deepfake tools,” suggesting a variety of techniques developers can use to limit who uses their tools and to deter bad actors. Others have focused more on detecting deepfakes, whether through techniques such as “soft biometrics” that determine whether videos of public figures have been manipulated, as discussed by David Ingram and Jacob Ward for NBC News, or through initiatives such as Facebook’s Deepfake Detection Challenge, summarized by Eliza Strickland for IEEE Spectrum. Meanwhile, a new report from WITNESS explores the dilemmas of authenticating media via “verified-at-capture” technologies.
  • Local news? This week, EU DisinfoLab exposed a global network of more than 265 fake local news sites, managed from India, that spreads pro-India and anti-Pakistan talking points. Meanwhile, in the U.S., the Tow Center’s Priyanjana Bengani describes a similar network of at least 450 sites posing as local news outlets to spread conservative messages. Bengani explains the apparent strategy: “Websites and networks can aid campaigns to manipulate public opinion by exploiting faith in local media. The demise of local journalism in many areas creates an information vacuum, and raises the chance of success for these influence campaigns.”
  • Secondary Infektion: New research from Graphika suggests that the circulation of a “do-not-prosecute list” supposedly generated by Marie Yovanovitch, the former U.S. ambassador to Ukraine, is linked to the Russian “Secondary Infektion” operation, writes Isaac Stanley-Becker for the Washington Post. Meanwhile, two weeks ago, Reddit reported that it had taken down accounts connected to the Russian operation that posted leaked documents in advance of the UK election. And for BBC News, Jamie Ryan and Mike Wendling explore the Russian group’s use of the BuzzFeed community section and Quora to promote disinformation.
  • Earlier this month, Twitter CEO Jack Dorsey announced plans to create a decentralized protocol for social media, described by Katie Paul and Munsif Vengattil for Reuters. In Reason, Techdirt founder Mike Masnick, who appears to be an inspiration for this plan, explains how this move might (key word) improve competition, privacy, and content moderation. Meanwhile, communications scholar Jennifer M. Grygiel critiques the plan for NBC: “Without effective regulation, and sound ethics and corporate responsibility, new protocols and proposals are destined to benefit companies and would likely become a jungle by another name.” And in The New York Times, Nathaniel Popper illuminates the skepticism about Twitter’s announcement from proponents of other decentralization projects.
  • In an essay for SSRC’s Mediawell project, political communication scholar David Karpf argues that while “strategic intent is not strategic impact” when it comes to the effectiveness of disinformation campaigns at manipulating votes, such efforts present a real danger to democratic norms by allowing politicians to discard the “myth of the attentive public.” 
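For the technically curious, as promised above: the face swaps Lee describes rest on a simple autoencoder trick, a single shared encoder paired with a separate decoder for each identity. The PyTorch sketch below is our own minimal illustration of that general setup, not Lee’s actual pipeline; every name, dimension, and hyperparameter here is an arbitrary choice for demonstration, and random tensors stand in for real face data.

    # Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
    # behind early face-swap deepfake tools (illustrative only).
    import torch
    import torch.nn as nn

    class FaceSwapAutoencoder(nn.Module):
        def __init__(self, side=64, code_dim=512):
            super().__init__()
            pixels = side * side * 3
            # One encoder learns features shared by both faces...
            self.encoder = nn.Sequential(
                nn.Flatten(), nn.Linear(pixels, code_dim), nn.ReLU()
            )
            # ...while each identity gets its own decoder.
            self.decoders = nn.ModuleDict({
                "a": nn.Sequential(nn.Linear(code_dim, pixels), nn.Sigmoid()),
                "b": nn.Sequential(nn.Linear(code_dim, pixels), nn.Sigmoid()),
            })
            self.side = side

        def forward(self, x, identity):
            code = self.encoder(x)
            out = self.decoders[identity](code)
            return out.view(-1, 3, self.side, self.side)

    model = FaceSwapAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # Training: each decoder learns to reconstruct its own face from the
    # shared code. Real tools run loops like this for days on a GPU over
    # thousands of cropped video frames; random tensors stand in here.
    for step in range(100):
        faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch for person A
        faces_b = torch.rand(8, 3, 64, 64)  # stand-in batch for person B
        loss = (loss_fn(model(faces_a, "a"), faces_a)
                + loss_fn(model(faces_b, "b"), faces_b))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The "swap" happens at inference time: encode a frame of person A,
    # then decode it with person B's decoder.
    swapped = model(torch.rand(1, 3, 64, 64), "b")

The expense Lee highlights comes from scale: production tools train convolutional versions of this loop on thousands of aligned face crops per identity, which is what keeps convincing deepfakes costly and limited for now.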
We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan

 

Brought to you by:
MapLight
Copyright © 2019 MapLight, All rights reserved.


Want to change how you receive these emails? You can update your preferences or unsubscribe from this list.