Digital Deception Decoder 
January 14, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • It’s a pivotal election year in a new decade—and Facebook has doubled down on its policy of not fact-checking ads from politicians, report Tony Romm, Isaac Stanley-Becker, and Craig Timberg for the Washington Post. Instead, the company announced minor tweaks: adding more information on ad impressions to its ad library and allowing users to opt out of seeing political ads. For the Associated Press, MapLight’s Daniel Newman recaps the stakes of Facebook’s inaction for our democracy. In addition, this week the Columbia Journalism Review’s Matthew Ingram is speaking with other experts about the policy, including Tatenda Musapatike of Acronym and Harvard Law’s Evelyn Douek.
  • A leaked memo from executive Andrew Bosworth sheds light on thinking inside Facebook, as summarized by Kevin Roose, Sheera Frenkel, and Mike Isaac at The New York Times. In a winding post, Bosworth covers what he sees as Facebook’s role in electing Trump (which he attributes to the campaign’s digital advertising, rather than Cambridge Analytica or Russian interference) and the platform’s addictive tendencies (which he attributes to individual choice rather than design). Meanwhile, in a case of sponcon gone wrong, Facebook placed an article in Teen Vogue that describes its efforts around election protection and disinformation in glowing terms—but the article wasn’t originally labeled as sponsored content and was rapidly taken down. The Washington Post’s Reis Thebault describes the bizarre affair.
  • Other platforms: In AdAge, George P. Slefo covers Spotify’s announcement that it will suspend political ads in 2020. And the Wall Street Journal’s Georgia Wells and Emily Glazer discuss how campaigns are ramping up their use of TikTok, despite the platform’s ban on paid political ads and supposedly apolitical brand. TikTok also recently expanded its content moderation rules, including prohibiting misinformation around elections and civic processes, writes Sara Fischer in Axios.
  • Deepfakes and cheapfakes: A congressional hearing last week—presumably the first of many in 2020—focused on election disinformation, and in particular, the threat of deepfakes, per CNET’s Laura Hautala. In advance of the hearing, Facebook announced a policy banning deepfakes (including in advertising by politicians), though as the Post’s Romm, Stanley-Becker, and Drew Harwell point out, this likely wouldn’t cover “cheapfakes” such as an edited video of Nancy Pelosi that circulated last year. At the Times, Nick Corasaniti explores how a similarly misleadingly edited video of Joe Biden spread rapidly online on New Year’s Day. Meanwhile, Jay Peters writes for The Verge that Reddit has instituted a policy banning deceptive impersonation of individuals and entities, including via deepfakes. 
  • At BuzzFeed, Craig Silverman, Jane Lytvynenko, and William Kung delve into the rise of a global market of PR firms selling disinformation to the highest bidder—what Facebook head of cybersecurity policy Nathaniel Gleicher describes as “the professionalization of deception.”
  • Tensions with Iran escalated rapidly in the new year due to the U.S. killing of Iranian general Qasem Soleimani; with those tensions came concerns about potential disinformation campaigns. For Axios, Fischer discusses the Iranian disinformation playbook. 
  • A group of experts convened by John Borthwick and Chris Hughes—including former tech company employees, researchers, and policy professionals—has released “Ten things technology platforms can do to safeguard the 2020 U.S. election.” They call for companies to increase transparency for political advertising, improve ad archives, and use consistent definitions, among other seemingly straightforward recommendations.
  • UNC’s Center for Information, Technology, and Public Life has launched a useful set of resources on “Digital Politics.” Researchers working with the center have published a report on “Digital Political Ethics,” which draws on conversations with political campaign practitioners and platform employees to inform recommendations regarding four broad principles: prioritizing democratic participation; protecting election integrity; increasing transparency; and ensuring fairness and consistency. The website also offers a handy guide to major platforms’ political advertising policies and targeting capabilities.
  • For the National Endowment for Democracy, researchers Samuel Woolley (UT Austin) and Katie Joseff (Institute for the Future) explore the “demand side” of disinformation—the passive and active psychological biases that make people susceptible to believing and spreading false information, and their manifestations across different cultural contexts—and discuss the use of fact-checking and media literacy to counteract such problems.
We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan


Brought to you by: MapLight
Copyright © 2020 MapLight. All rights reserved.


Want to change how you receive these emails? You can update your preferences or unsubscribe from this list.