Digital Deception Decoder 
September 7, 2018

MapLight and the Digital Intelligence (DigIntel) Lab at the Institute for the Future have teamed up on a free newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Political theater: On Wednesday, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey testified before the Senate Intelligence Committee regarding efforts to combat foreign interference (Google was notably absent). Bloomberg recaps the wide-ranging discussion, and in this IEEE Spectrum interview, Oxford Internet Institute Computational Propaganda Project researcher Samantha Bradshaw offers important context for senators’ questions. Later in the day, Dorsey appeared before the House Energy and Commerce Committee to respond to (unfounded) allegations of bias against conservatives that have increasingly gained political currency on the right.
  • The day before the hearings, Facebook CEO Mark Zuckerberg published an op-ed in the Washington Post describing Facebook’s election security efforts and managing expectations in advance of the midterm elections. Meanwhile, Bloomberg’s Sarah Frier and Alyza Sebenius point out the lack of effective coordination within and between technology companies and government agencies: “With less than 10 weeks until the U.S. midterms, this game of political hot potato — the passing of responsibility between companies and the government — is leading to the uncomfortable conclusion that nobody has the full picture.”
  • Civil society groups continue to probe political advertising on Facebook and Google. Here, BuzzFeed’s Charlie Warzel discusses a recent stunt by the Campaign for Accountability, which posed as Russians and successfully bought political issue ads on Google. And here, ProPublica shares some interesting findings from their Political Ad Collector, including Uber’s targeting of Black Lives Matter supporters, ads around the Supreme Court nomination battle, and more.
  • From the fringe: In the Washington Post, Craig Timberg and Drew Harwell discuss a study from the Network Contagion Research Institute that illuminates the scale of antisemitic and white nationalist content on platforms such as 4Chan and Gab following major political events—and the spread of that content to more mainstream sites. (The paper can be found here.) Meanwhile, in Slate, April Glaser examines how bots were instrumental to spreading the QAnon conspiracy theory from 4Chan into more mainstream public consciousness. And Kevin Roose at The New York Times explores how fringe figures and theories are finding a home in private Facebook groups.
  • Writing for Inside Science, Yuen Yiu looks at some of the technological and policy-based approaches to combating computational propaganda, from using AI to identify bots to regulation, content moderation, and media literacy, describing the “cat-and-mouse game” of trying to get ahead of deceptive digital campaigns. And at The New York Times, Keith Collins and Sheera Frenkel have this disorienting quiz asking readers to differentiate legitimate Facebook posts from deceptive digital campaigns. (It’s not easy.)
  • In BuzzFeed News, Davey Alba dives into the destabilizing role of Facebook in the Philippines, and its connections to the U.S.: “If you want to know what happens to a country that has opened itself entirely to Facebook, look to the Philippines. What happened there — what continues to happen there — is both an origin story for the weaponization of social media and a peek at its dystopian future.” One chilling snippet: Facebook embedded employees in Rodrigo Duterte’s 2016 campaign, just as it did with U.S. candidates.
  • Finally, check out this newly updated resource from researchers at Trinity University. Propaganda Critic is a media literacy website devoted to teaching the public about deceptive technologies. It combines research from the fields of communication, history, and psychology to offer examples of propaganda, tips for analyzing messages, and explanations of underlying cognitive biases, including information about computational propaganda.

We want to hear from you! If you have suggestions for items to include in this newsletter, email them to hamsini@maplight.org. - Hamsini Sridharan

Brought to you by:
MapLight     DigIntel Lab
Copyright © 2018 MapLight, All rights reserved.

