Digital Deception Decoder 
November 2, 2018

MapLight and the Digital Intelligence (DigIntel) Lab at the Institute for the Future have teamed up on a free newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!
  • In the wake of the Pittsburgh synagogue shooting, there’s a growing recognition of the rise in anti-Semitic speech online -- and its terrifying consequences.  Earlier this month, research from the Digital Intelligence Lab for the Anti-Defamation League examined anti-Semitic hate speech on social media, analyzing more than 7.5 million tweets over a two-and-a-half-week span and concluding that bots were responsible for roughly one-third of the accounts repeatedly spreading derogatory terms about Jews.  And in the days immediately following the shooting, the New York Times found nearly 12,000 Instagram posts with the hashtag “#jewsdid911.”  In Quartz, Max de Haldevang looks at the steps to radicalization online: anger, blame, self-marginalization, evangelizing your obsession, and dehumanization.
  • “It's good to be in the business of anger on the internet.” That’s a key takeaway from a recent edition of The New York Times podcast, The Daily.  Reporter Kevin Roose spent time with the husband and wife co-founders of Mad World News, which built a massive conservative audience on Facebook by posting divisive and hyper-partisan content.  Roose reports that the couple occasionally raked in monthly revenues over $100,000 before Facebook altered its algorithm. On a related note, new research on the 2018 midterms from the Computational Propaganda Project finds social media users are “sharing more junk news than professional news overall.”
  • Bot or not? When it comes to online discussion about the Central American migrants making their way to the United States, or “the caravan,” more than half the discussion on Twitter has been driven by bots, according to research by Robhat Labs, documented by Issie Lapowski in Wired. And in the aftermath of the Pittsburgh synagogue shooting, bots were driving about a quarter of the conversation. Robhat Labs’ new tool sounds like a promising resource for journalists -- allowing for analysis of bot activity on specific topics as news events occur.
  • An investigation from Vice News finds Facebook’s new “Paid for by” feature on political advertisements is far from a panacea for misleading advertising.  The team at Vice applied to buy ads on behalf of 100 senators -- and every ad was approved. As William Turton at Vice explains, “these tests show that compliance with the feature is entirely voluntary, meaning a tool that Facebook introduced to increase trust in advertising can also be used as a vector for misinformation.”  Meanwhile, the editorial board at the Washington Post argues Facebook needs to do more to accurately identify political advertisers.

  • Loopholes for foreign money: MapLight published an infographic outlining the existing avenues for foreign actors to influence U.S. politics and the ways we can close them.  Currently, foreign individuals, governments, and companies have their pick of a range of options for spending to influence U.S. politics, including contributing to dark money organizations; making contributions from companies with foreign ownership; contributing from domestic subsidiaries of foreign companies; or simply giving money directly to campaigns online with credit cards. On a related note, Eric Geller reports in Politico that efforts from the White House to counter foreign intervention in our elections are virtually non-existent.
Brought to you by:
MapLight     DigIntel Lab
Copyright © 2018 MapLight, All rights reserved.
