Digital Deception Decoder 
June 29, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Note: We’ll be offline for the July 4th long weekend and back on track the following week. 
  • Early this week, the Washington Post’s Cat Zakrzewski outlined social media platforms’ preparations to counter disinformation surrounding the first Democratic presidential debates. Following the debates, Ben Collins and Ben Popken at NBC News revealed that far-right trolls engaged in a brigading campaign to game online polls in favor of Tulsi Gabbard and Bill de Blasio; those manipulated polls were then reported in The Hill and the Daily Mail. Meanwhile, BuzzFeed’s Craig Silverman and Jane Lytvynenko debunked disinformation about Kamala Harris’s race and citizenship that was spread by political figures and at least one botnet after the debates.
  • Facebook published a report and appendices detailing the extensive global feedback it has gathered on its proposed “Supreme Court” for content moderation. The report declares that Facebook will move forward with “stronger checks and balances, more due process, and increased transparency.” For Slate, legal scholars Kate Klonick and Evelyn Douek discuss the challenges the company faces with this body, noting the similarities between Facebook’s report and the Federalist Papers published to promote ratification of the U.S. Constitution. “Facebook is having a constitutional moment,” they write.
  • Last week, Sen. Josh Hawley (R-Mo.) introduced the “Ending Support for Internet Censorship Act,” which would require social media companies to convince regulators that their content moderation is carried out in a “politically neutral” fashion. While conservative lawmakers have frequently accused Big Tech of political bias, Ben Brody of Bloomberg reports that platforms’ past removals of far-right accounts have been due to violations of hate speech and harassment policies rather than partisan censorship. On a related note, the Washington Post reports that Reddit has quarantined the popular pro-Trump forum r/The_Donald for repeated violations of policies against inciting violence, while Twitter announced that it will begin labeling tweets from prominent national political figures that violate its rules.
  • Data protections? In Axios, Kim Hart describes another bill introduced by Hawley this week with Sen. Mark Warner (D-Va.), which would require major platforms to disclose the value of individual users’ data. At Bloomberg, Brody reports that Sen. John Thune (R-S.D.) is planning to introduce a bill that would enable social media users to opt out of having their data used by platforms’ algorithms. But in WIRED, data scientist Hamdan Azhar offers a less-than-flattering counterpoint to the rash of privacy rhetoric in Congress, noting that 81 sitting senators, 176 representatives, and nearly every 2020 presidential candidate have Facebook tracking pixels embedded on their campaign websites.
  • In the states: While several tech-related bills languish in Congress, states have been far more proactive. In a new post for MapLight, Margaret Sessa-Hawkins and I review some of the most relevant online transparency and privacy laws that have passed at the state level.
  • A recent study by Samuel Woolley (German Marshall Fund), Roya Pakzad (Stanford), and Nick Monaco (Graphika) analyzed Islamophobic disinformation on Gab, the self-styled “free speech social network,” including messages about Muslim candidates in the 2018 midterms. Their research revealed that four in 10 of the most frequently cited domains on Gab linked to hate groups and disinformation sites, and that at least one frequently referenced site may have been an information campaign targeting the election. YouTube was the single most frequently cited domain.
  • A study at the University of Cambridge demonstrates that exposure to the methods of disinformation can help inoculate the public against false information. This “fake news vaccine” was developed as a video game (playable here) in which participants created manipulated news and social media content themselves, which in turn reduced their susceptibility to disinformation by 21%. Promisingly, the effect was most pronounced among those initially most inclined to believe false information.
  • On OneZero, UN Special Rapporteur on freedom of expression David Kaye calls for “A New Constitution for Content Moderation,” arguing that the global impact of both disinformation and responses to it requires platforms to adopt an approach grounded in decentralized decision-making, human rights standards, decision-making transparency, and industry-wide accountability and oversight. Kaye also discusses governments’ role in monitoring the companies and protecting policies such as platform immunity.

We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan

Brought to you by: MapLight and the DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.

