July 19, 2019
Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!
We depend on your support to publish this newsletter.
Since we launched a year and a half ago, we've covered everything you need to know about digital deception in our political system. Please donate today by contacting Hamsini Sridharan (firstname.lastname@example.org) or visiting the MapLight website.
- Note: This week’s Decoder was brought to you by MapLight intern Abby Luke.
- Last week, President Trump hosted a “social media summit” at the White House, inviting 200 conservative social media voices, including several noted peddlers of conspiracy theories and disinformation, to discuss the supposed “silencing” of conservative opinions and supporters of the president, reports Katie Rogers at The New York Times. However, advocates note that it is hate speech, not conservative speech, that is being removed. Meanwhile, in the lead-up to this week’s Congressional antitrust hearing, the Times’ Nellie Bowles writes about how pushing back against Big Tech has made for “interesting bedfellows” across party lines.
- A Yahoo News investigation of the conspiracy theories surrounding the 2016 death of DNC staffer Seth Rich found that Russian intelligence was behind the disinformation about the murder, reports Michael Isikoff. Claiming that Rich was a whistleblower who was gunned down by assassins working for Hillary Clinton, the Russian conspiracy theory was subsequently picked up by the far-right. In a six-part podcast series called “Conspiracyland,” Yahoo News further details Russia’s involvement.
- Libra: Earlier this week, the Senate Banking Committee held a hearing on Facebook’s controversial cryptocurrency project. Pressed by committee members, Facebook’s David Marcus maintained that the site would not collect any data from Libra transactions without permission, details Russell Brandom for The Verge. Libra’s implications for digital deception remain unknown.
- In Wired, former employee Guillaume Chaslot explains how YouTube’s recommendation engine propagates dangerous cycles of misinformation. The AI behind YouTube’s recommendation system progressively narrows the content shown to a user; as recommendations become more tailored, harmful content becomes less likely to be flagged. Engagement with misinformation leads creators to produce more harmful content, reinforcing the feedback loop. (Related: researchers at Google’s DeepMind determined that such feedback loops can ultimately shape one’s worldview.)
- Free speech? In Nieman Lab, Alana Schetzer warns about the dangers of a “fake news” ban, as more countries continue to implement such policies. Under this restriction, governments could impose serious limitations on speech by labeling attacks on their authority as “fake.” Meanwhile, at Slate, communications law professor Nina Iacono Brown asserts that Congress may be rushing to address the threat posed by deepfakes without adequate consideration of First Amendment protections.
- Digital ads: In The Fulcrum, Tristiaña Hinton reveals that 2020 Democratic candidates have spent more on Facebook ads promising to address dark money than any other policy area, mostly driven by Sen. Amy Klobuchar. Klobuchar is also a sponsor of the Honest Ads Act. Meanwhile, at the Wall Street Journal, Emily Glazer and Patience Haggin note that Google’s ad transparency database is full of errors.
- In Social Media and Society, communications scholar Whitney Phillips traces the predecessors of trolling and disinformation in seemingly innocuous “internet culture.” Phillips writes that “fun and funny and apparently harmless things [...] have a way of establishing precedent and a step-by-step media manipulation guide that is easily hijacked by those looking to do harm, whose actions often fly under the radar—because those actions look familiar, because they look like the things that used to be fun.”
- In the Washington Post, Geoffrey Fowler lays out how the viral photo application FaceApp grants the Russian company “perpetual access” to uploaded photos. Many are raising concerns about the app’s potential use as part of a broader disinformation campaign. In response, Senator Chuck Schumer called upon the FTC and the FBI to investigate the app’s usage of potentially sensitive personal information, reports Emma Bowman for NPR. In the MIT Technology Review, Karen Hao discusses the many ways the app’s data could be used—such as to generate deepfakes.
We want to hear from you! If you have suggestions for items to include in this newsletter, please email email@example.com. - Hamsini Sridharan