Digital Deception Decoder 
February 1, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!
  • Place your trust in the crowd. That’s one possible takeaway for social media companies ranking news content, based on a new study that points to crowdsourcing as a valuable fact-checking method. The study, conducted by Gordon Pennycook and David Rand and published in the Proceedings of the National Academy of Sciences, found that Democrats were more likely than Republicans to trust mainstream news -- but regardless of party, fake news websites and hyperpartisan sites scored low on credibility. If you don’t have time to read the full study, NiemanLab has a helpful summary here.
  • Credit, sure. Consequences? No thanks. President Trump spoke with New York Times publisher A.G. Sulzberger in an interview for The Daily podcast on Friday, recounted here by the NYT. While the President was happy to claim responsibility for helping spread the term “fake news,” he was more skeptical of the connection between his rhetoric and the uptick in violence against journalists worldwide.
  • What can computer engineers do to be better allies in the fight against misinformation online? Brendan Nyhan and Patrick Ball explore that question and outline advice for technologists in an essay for Defusing Disinfo. They argue the answer starts with technologists leveraging their talent to fight for positive change, not just looking for the exits when their companies’ reputations take a hit.
  • The European Commission “sounds less than impressed” with the progress of tech giants Facebook, Google, Twitter, and Mozilla in addressing the spread of misinformation online, according to Natasha Lomas of TechCrunch. The companies submitted their first monthly progress reports to the commission this week as part of a process designed to reduce false information online in the run-up to the 2019 European Parliament elections in May.
  • Following the news last week of major layoffs at Buzzfeed, Huffington Post, and Gannett, Margaret Sullivan of the Washington Post offers a thoughtful commentary on how news consumers can best respond when their local newspaper takes a hit. Her advice: keep your subscription to help support the reporters still on the job rather than canceling to send a message. Meanwhile, Jeremy Littau argues in Slate that decades of irresponsible behavior by newspaper conglomerates led to the current journalism crisis. As Littau explains, “Decades of sparse investment and enormous debt service left media companies exposed and hamstrung at a time when investment was needed.”
  • Ahead of national elections in India this April and May, significant concerns are emerging about whether enough safeguards are in place to stop the spread of misinformation online. Billy Perrigo writes in TIME that volunteers for the ruling party are using WhatsApp group chats to spread political messages -- messages that frequently include “false information and hateful rhetoric.” Writing for The Next Web, Ivan Mehta documents how a host of other apps and platforms are also adding to India’s fake news problem. And in a post for the Council on Foreign Relations, Conor Sanchez takes a wider look at the threats misinformation online poses to democracy in the developing world.
We want to hear from you! If you have suggestions for items to include in this newsletter, email them to - Hamsini Sridharan
Brought to you by:
MapLight     DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.
