Digital Deception Decoder 

October 5, 2018

MapLight and the Digital Intelligence (DigIntel) Lab at the Institute for the Future have teamed up on a free newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • For WIRED, New Media Frontier’s Molly McKew traces the evolution of a cadre of right-wing influencers—including Roger Stone, Mike Cernovich, and Jack Posobiec—engaging in domestic “information terrorism” via the spread of online conspiracy theories: “The feedback within this network is a form of radicalization and extremism. [...] it has become a culture of digital violence that is bleeding into real life.” This network’s misogynistic architecture was deployed against Christine Blasey Ford in support of Brett Kavanaugh’s nomination to the Supreme Court. Also check out WIRED’s guide to online conspiracy theories from Emma Grey Ellis.
  • Bot bill: Last week, California Governor Jerry Brown signed a first-of-its-kind bill into law, banning automated accounts (bots) from posing as humans to mislead Californians, reports Gili Malinsky for NBC News. Meanwhile, Twitter has announced an update to its election integrity rules, including new measures against fake, spammy, and automated accounts. For a nuanced consideration of challenges in detecting and regulating bots, see this recent paper from Robert Gorwa and Douglas Guilbeault at the Oxford Internet Institute Computational Propaganda Project.
  • At the federal level, in The New York Times, Kara Swisher examines a new “Internet Bill of Rights” developed by House Democrats, which includes several key privacy protections (but notably doesn’t include a right to know the sources of digital content). And in the Washington Post, ACLU Senior Legislative Counsel Neema Singh Guliani discusses how tech companies’ surprising new support for federal privacy legislation likely betrays an agenda to preempt stronger laws in the states.
  • A new report commissioned by the Knight Foundation deepens our understanding of how false news spread on Twitter around the 2016 election—and how it continues to spread today. Researchers Matthew Hindman (GWU) and Vlad Barash (Graphika) found that 80% of the accounts that spread misinformation in 2016 are still active and that just 10 sites accounted for 65% of misleading news links shared during the election.
  • AI arms race: In the MIT Technology Review, Karen Hao describes new research on the use of machine learning to detect disinformation. A team of researchers from MIT, Qatar Computing Research Institute, and Bulgaria’s Sofia University found that their best model—after testing more than 900 variables—was only accurate 65% of the time, and that much more training data is needed to increase accuracy.
  • Facebook’s nightmares never cease. Last Friday, the platform revealed that it had been hacked, exposing the login info for nearly 50 million users. Mike Isaac and Kate Conger point out in The New York Times that the attack could affect many more people, given how many other apps let users log in with their Facebook credentials (although there is as yet no evidence that this has happened). Arjun Kharpal writes for CNBC that the EU is considering an investigation of the hack, which, under the GDPR, could result in massive fines.
  • Southeast Asia is one of the most contested fronts in the battle against disinformation. BuzzFeed’s Davey Alba and Charlie Warzel evaluate Facebook’s recent meeting with civil society representatives from Myanmar, the Philippines, and Sri Lanka, which may mark the platform’s transition from blanket global policies to country-specific policies and standards. Meanwhile, Craig Silverman reports from Singapore on a new legislative proposal to force platforms to crack down on fake news—which is raising concerns about press freedom.
  • CDA 230: In Quartz, Heather Timmons and Hanna Kozlowska dig into a portion of the new North American trade deal that may benefit tech companies. The deal would essentially export Section 230 of the U.S. Communications Decency Act, which immunizes digital platforms from legal liability for third-party content.

We want to hear from you! If you have suggestions for items to include in this newsletter, email them to 

Brought to you by:
MapLight     DigIntel Lab
Copyright © 2018 MapLight, All rights reserved.
