Digital Deception Decoder 
November 30, 2018

MapLight and the Digital Intelligence (DigIntel) Lab at the Institute for the Future have teamed up on a free newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Dictionary.com’s word of the year is “misinformation,” in an entry that distinguishes it from “disinformation” and includes a nice glossary of related terms.
  • In another defining moment (sorry) for Facebook, outgoing Head of Communications and Policy Elliot Schrage took responsibility for hiring Definers Public Affairs in a blog post conveniently published the day before Thanksgiving. The post includes a statement from COO Sheryl Sandberg acknowledging that she had received emails about Definers, a firm she had previously denied knowing about (in fact, she may have commissioned the research into George Soros herself). The New York Times’ Jack Nicas has this piece on Definers’ tactics, while Kara Swisher asks pointed questions about why we’re focusing on Sandberg and letting Zuckerberg off the hook.
  • Meanwhile, organizations targeted by Facebook’s “opposition research” are speaking out. Emily Stewart reports for Vox that this week, representatives from Color of Change met with Facebook executives, including Sandberg, to demand accountability. Facebook has acquiesced to their call to release updates on the progress of the civil rights audit it announced earlier this year. For more on the audit, check out Tonya Riley’s story in Mother Jones.
  • In this thought-provoking essay for Ribbonfarm, Renee DiResta argues that we are in the midst of an information war and that we risk missing the big picture by focusing on individual skirmishes. Many of the solutions under consideration by lawmakers, she contends, are already out of date: “Focusing on feature-level tactical fixes that simply shift the boundaries of what’s permissible on one platform is like building a digital Maginot line; it’s a wasted effort, a reactive response to tactics from the last war.”
  • A study published in Nature Communications illuminates the role of Twitter bots in spreading “low-credibility content.” Examining 14 million messages tweeted between 2016 and 2017, Shao et al. found that just 6% of accounts were classified as bots, yet those accounts were responsible for spreading 31% of tweets linking to low-credibility content (see the back-of-the-envelope sketch after this list). The authors conclude that social bots are effective at social media manipulation but could potentially be curbed by better machine learning-based bot detection systems or even the use of CAPTCHAs.
  • In Motherboard, Henry Farrell and Bruce Schneier discuss their new paper arguing that instead of using a national security framework to understand threats to confidence in democracies, we should turn to computer security. They find that “flooding” and “confidence” attacks from American leaders may do more to destabilize trust in institutions than foreign interference. (Full paper here.)
  • Check out this report from the Campaign Legal Center examining how political actors dodged disclosure for digital ads in the 2018 midterms by exploiting loopholes in both government regulations and the platforms’ transparency initiatives.
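Since the Shao et al. figures above invite a quick sanity check, here is a back-of-the-envelope sketch in Python of just how over-represented bots are among accounts spreading low-credibility links. The 6% and 31% shares come from the study; the absolute account and tweet totals below are invented purely for illustration.

    # Headline figures from Shao et al.: roughly 6% of accounts were
    # classified as likely bots, yet they spread roughly 31% of tweets
    # linking to low-credibility content.
    bot_share_of_accounts = 0.06
    bot_share_of_links = 0.31

    # Over-representation factor: bots account for about 5x more of this
    # content than their share of the account population would predict.
    amplification = bot_share_of_links / bot_share_of_accounts
    print(f"Bots are over-represented by a factor of {amplification:.1f}")

    # The same disproportion in per-account terms, using hypothetical
    # totals (these two numbers are NOT from the study).
    total_accounts = 100_000
    total_link_tweets = 1_000_000
    per_bot = (bot_share_of_links * total_link_tweets) / (bot_share_of_accounts * total_accounts)
    per_human = ((1 - bot_share_of_links) * total_link_tweets) / ((1 - bot_share_of_accounts) * total_accounts)
    print(f"Low-credibility links per bot: {per_bot:.0f}, per human account: {per_human:.0f}")

On these assumptions, the average bot shares roughly seven times as many low-credibility links as the average human account, which helps explain why the authors suggest that curbing a relatively small bot population could meaningfully reduce the spread.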
We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org.

- Hamsini Sridharan

 

Brought to you by MapLight and the DigIntel Lab at the Institute for the Future.
Copyright © 2018 MapLight, All rights reserved.


Want to change how you receive these emails? You can update your preferences or unsubscribe from this list.