Digital Deception Decoder 

October 12, 2018

MapLight and the Digital Intelligence (DigIntel) Lab at the Institute for the Future have teamed up on a free newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Mark Latonero’s new report for Data & Society makes the case for using a human rights frame to govern artificial intelligence. Latonero examines the implications of AI for five areas of international human rights (non-discrimination, equality, political participation, privacy, and freedom of expression) and offers recommendations that can be implemented by various stakeholders, including national governments, intergovernmental organizations, tech companies, civil society, and academia.
  • In this piece from the National Endowment for Democracy, leading experts (including the DigIntel Lab’s Sam Woolley) explore the implications of advances in AI for digital disinformation, including more lifelike (and less detectable) bots and deepfake images and videos, and what can be done about it. Woolley predicts: “Forthcoming campaigns will likely harness our other senses: realistic sounding AI voices represent a new future for push polling and VR will allow for multisensory propaganda experiences.” Meanwhile, at the Brookings Institution, Alina Polyakova explores the contours of Russian disinformation in elections, discusses the AI future, and argues that the U.S. needs to revisit Cold War-era strategies for dealing with information warfare.
  • Relevant to the deepfake future: NiemanLab’s Laura Hazard Owen reports on a new UC Davis study which suggests that people’s assessments of the credibility of fake images rely less on social context cues (such as the credibility of the source) and more on their digital media literacy and photo-editing skills, as well as their previous views about the content of the image.
  • Facebook just announced the removal of more than 800 pages and accounts peddling misleading political content that violated its policies on spam and coordinated inauthentic activity. As Elizabeth Dwoskin and Tony Romm discuss in the Washington Post, unlike other recent high-profile purges leading up to the midterm elections, these networks appear to be domestic and originated from both sides of the political spectrum (although Facebook hasn’t shared the names of most of the groups), raising questions about freedom of expression. In an echo of Cambridge Analytica, CNET’s Laura Hautala reports that Facebook also disabled the accounts of Russian firm Social Data Hub, which appeared to be selling user data scraped off the platform.
  • Meanwhile, a leaked presentation is shedding light on Google’s internal debate over freedom of expression. Nick Statt at The Verge digs into the slide deck (titled “The Good Censor”), which was leaked to Breitbart News earlier this week.
  • Adelson’s app: Ishmael N. Daro’s BuzzFeed article from a few weeks ago offers insight into an app called Act.IL, which organizes volunteers to conduct what are essentially astroturf campaigns to shape online dialogue around the Israel/Palestine conflict. Development of the app was funded by conservative American political mega-donor Sheldon Adelson.
  • In Bellingcat, Robert Evans investigates how online white supremacist activists became radicalized, or “red-pilled.” The chilling article covers the interconnected influence of memes, 4chan, Discord, YouTube, and Infowars on 75 individuals’ slide into fascism. And Slate’s April Glaser gives us a deep dive into the ways that the gaming chat platform Discord has been used by these groups to organize.
  • Historical perspective: In the Columbia Journalism Review, Anya Schiffrin explores a World War II-era effort by the Institute for Propaganda Analysis (IPA) to combat propaganda with public media literacy education, and its lessons for today’s efforts against digital disinformation. “The lesson of the IPA is not just that media literacy education is hard to do well but that when societies become truly polarized, just teaching tolerance and critical thinking can be controversial.”

We want to hear from you! If you have suggestions for items to include in this newsletter, email them to 

Brought to you by:
MapLight     DigIntel Lab
Copyright © 2018 MapLight, All rights reserved.

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.