Digital Deception Decoder 
November 9, 2018

MapLight and the Digital Intelligence (DigIntel) Lab at the Institute for the Future have teamed up on a free newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • ‘Twas the night before the election, and Facebook announced the removal of 115 Facebook and Instagram accounts that were engaged in coordinated, manipulative activity—accounts that were later linked to the Internet Research Agency. Twitter revealed that in the weeks leading up to the election, it deleted over 10,000 automated accounts spreading messages aimed at voter suppression, according to this Reuters exclusive. Meanwhile, at NBC News, Ben Collins got some accidental insight into the trolls jockeying to stay ahead of platforms’ enforcement policies.
  • The new normal: The good news is, there wasn’t a catastrophic digital crisis on Election Day (that we know of)—but this doesn’t mean we’ve solved anything. In the Washington Post, Craig Timberg and Tony Romm explore the idea that most election disinformation has been domestic in origin, and therefore much harder to address. In Slate, Will Oremus documents the platforms’ efforts to thwart crises—and their limitations. Charlie Warzel says it all in BuzzFeed: “years of algorithmically powered information warfare have drastically rewired our political discourse, turning it ever more toxic and blurring the lines of reality.”
  • Facebook midterms: Three days before the election, Jonathan Albright dug into Facebook and found ample weaknesses in the platform’s transparency efforts, including a lack of recursive accountability when it comes to verifying Page managers and labeling their political ad campaigns, a turn to shadow organizing via private Groups, and failures to enforce its own policies. Meanwhile, for the New York Times, Kevin Roose argues that Facebook is still behind the curve when it comes to dealing with disinformation and heavily reliant on journalists and researchers to detect threats. And at The Intercept, Sam Biddle reports that Facebook’s supposedly cleaned-up ad targeting system offered “white genocide conspiracy theory” as a targeting criterion as late as last week.
  • In HuffPost, Rowaida Abdelaziz examines how Muslim candidates in the midterms were targeted with Islamophobic disinformation. Abdelaziz points to new research from the Information Disorder Lab at the Harvard Kennedy School, which identified five key tactics used in attempts to digitally discredit American Muslim candidates and their allies. Chief among them: falsely linking them to terrorism and extremism.
  • Deplatformed? While Facebook and Twitter were countering voter suppression efforts on their platforms, it’s worth noting that Gab is back online, Alex Jones is still reaching audiences on Facebook, and LinkedIn, of all places, is now seeing more political disinformation as trolls seek out new ways to use social media.
  • In this context, it’s important to think about what meaningful transparency would look like. In Penn Law’s Regulatory Review, Stefanie Ramirez discusses a recent paper by MapLight’s Ann Ravel and USC’s Abby Wood that outlines three changes needed to illuminate digital political advertising: requiring platforms to maintain a useful repository of political advertising, closing loopholes around on-ad disclaimers, and tackling dark money. (Full paper here.)
We want to hear from you! If you have suggestions for items to include in this newsletter, please email us. - Hamsini Sridharan
Brought to you by:
MapLight     DigIntel Lab
Copyright © 2018 MapLight, All rights reserved.
