Digital Deception Decoder 
May 18, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • For the BBC, Sally Adee discusses moves by countries around the world, particularly Russia and China, to impose national boundaries on the open internet by controlling telecommunications infrastructure. The idea of a “sovereign internet” isn’t new; in recent years, however, it has gained traction not only among clearly authoritarian nations but also among countries like Brazil and India, which researchers at the New America Foundation call “digital deciders.”
  • Busted: This week, AP’s Isabel Debre and Raphael Satter explore Facebook’s recent ban of an Israeli company that ran coordinated influence campaigns targeting elections in Sub-Saharan Africa, Southeast Asia, and Latin America. BuzzFeed’s Craig Silverman covers a new report by the University of Toronto’s Citizen Lab exposing the news-spoofing tactics of an Iranian disinformation network. Daniel Boffey reports for The Guardian that a new analysis suggests half of all Europeans could have been exposed to Russian disinformation targeting the European elections. And in the U.S., Alex Kasprak at Snopes exposes an evangelical Christian network posing as diverse groups on Facebook while spreading pro-Trump messages—and its connections to GOP donors and PACs.
  • The Washington Post’s Tony Romm and Drew Harwell discuss why the U.S. declined to sign on to the Christchurch Call, a non-binding agreement to counter online extremism endorsed Wednesday by 18 governments and 5 major tech companies. New Zealand’s government led the agreement following the March shooting in Christchurch, which was livestreamed on Facebook. In response, Facebook announced new rules this week banning people who violate its Community Standards from livestreaming and promised to continue investigating the digital media manipulation tactics that enabled the Christchurch video to persist and spread long after the shooting.
  • Tracking: Garett Sloane reports in AdAge that Facebook is preparing advertisers for its new “Clear History” tool, which will allow users to delete data the platform gathers on their off-Facebook browsing behavior—a hit to its ad targeting capabilities. Why is this needed? At CNET, Alfred Ng debunks the theory that Facebook listens to conversations via people’s phones and uses what it hears to target ads. As Ng points out, “Facebook learns about your preferences through hundreds of data points—information like where you are, what you've bought, what you've looked for online and who your friends are can help tech giants make scary-accurate predictions.”
  • The Hill’s Olivia Beavers looks into a new project by UC Berkeley’s Hany Farid that attempts to get ahead of potential deepfakes by mapping the biometrics of 2020 presidential candidates.
  • A recent report from Stanford’s Global Digital Policy Incubator, ARTICLE 19, and David Kaye (UN Special Rapporteur on Freedom of Opinion and Expression) examines the idea of creating multistakeholder social media councils in every country to assist with the creation of content moderation guidelines.
  • In the Boston Review, scholars Henry Farrell and Bruce Schneier analyze “Democracy’s Dilemma”—the idea that “the open forms of input and exchange that [democracy] relies on can be weaponized to inject falsehood and misinformation that erode democratic debate,” particularly in new digital contexts. Stanford’s Riana Pfefferkorn offers thought-provoking commentary on Farrell and Schneier’s call for greater transparency, arguing that anonymous speech, too, is important in democracies. In her view, the underlying issue is a lack of trust in government and institutions (stemming from problems such as money in politics), and more must be done to address that root cause.
  • Bandwagon: In The Verge, Casey Newton offers a helpful roundup of Democratic presidential candidates’ positions on breaking up Facebook and regulating tech companies more broadly. Thus far, Sanders and Gabbard have joined Warren’s call to break up the company, while Harris, Biden, and Buttigieg have shown interest in the idea.
  • It’s important to document the wins. This week, Facebook announced that it would raise pay for its contract workers across the U.S., including its content reviewers, and provide them with additional resources. The announcement follows significant efforts by researchers and journalists to expose the conditions under which these workers operate; Slate’s Kate Klonick discusses the work that led to this victory, starting with in-depth research by UCLA’s Sarah Roberts. For a related conversation about invisible labor in Silicon Valley, see Angela Chen’s interview of anthropologist Mary L. Gray in The Verge.
  • According to a new survey from the Pew Research Center, publics in emerging economies are equal parts optimistic and concerned about growing access to the internet and social media. Researchers surveyed respondents in 11 countries and found that majorities in 8 of them believe social media platforms increase the risk of manipulation by domestic politicians, and a similar share think the platforms increase the risk of foreign interference in elections.

We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan

Brought to you by:
MapLight · DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.

