Digital Deception Decoder 
August 12, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Note: We're on vacation next week, so the Decoder will resume on Monday, August 26.
  • Last Sunday, Matthew Prince, CEO of DDoS protection company Cloudflare, announced that the company would stop serving 8chan because of the site’s “lawless” nurturance of multiple acts of violence, including the El Paso, TX shooting. Concerned about the power Cloudflare wields as part of the web’s infrastructure, Prince sought to outline guardrails for this type of decision-making in keeping with the rule of law: “where platforms have been designed to be lawless and unmoderated, and where the platforms have demonstrated their ability to cause real harm, the law may need additional remedies. [...] And, in some cases, it may mean moving enforcement mechanisms further down the technical stack.”
  • The stack: Commentators are similarly concerned about (though not necessarily opposed to) the decision. In the Atlantic, online speech scholar Evelyn Douek considers Cloudflare’s decision in light of its implications for debates about content moderation and rule of law. And on the Stratechery blog, Ben Thompson usefully reviews the decision in light of Section 230 of the Communications Decency Act and starts to outline a framework for moderation within “the stack” (at the top: public-facing platforms like Facebook and 8chan; at the bottom: internet service providers), suggesting that Prince’s decision should be treated as a “backstop” against the absence of action by public-facing platforms.
  • In The New York Times, Daisuke Wakabayashi offers a helpful overview of the history and debates surrounding CDA 230—especially timely given the heat the clause is drawing from both sides of the aisle (with conservatives arguing that it incentivizes platforms to engage in biased censorship, while liberals suggest that it immunizes platforms from moderating truly harmful content). And on a related note, in Politico, Margaret Harding McGill and Daniel Lippman report that the White House is considering an executive order to tackle social media companies’ supposed content moderation bias against conservatives (a baseless allegation). 
  • In the Washington Post, political communications scholars Shannon C. McGregor and Daniel Kreiss discuss their research showing that Facebook and Google’s political ad approval processes are largely opaque to campaigns, with decisions to allow or ban activity enforced without transparency.
  • Long (but important) read: In the Journal of Design and Science, media scholars Brian Friedberg and Joan Donovan outline the dangers of “pseudoanonymous influence operations”—accounts impersonating people from minority and marginalized groups—that are “being deployed as a means of weakening the visibility and political communication of social movements for short-term political gain, often at the expense of groups seeking civil rights protections.” (E.g., the Internet Research Agency’s Blacktivist account.) The authors offer broader insight into debates over the role of anonymity in political speech.
  • At BuzzFeed, Craig Silverman delves into new research on the evolution of digital disinformation tactics in the recent midterm elections in the Philippines, suggesting that these tactics presage what is to come in the 2020 U.S. election. The study’s authors, Jonathan Corpus Ong, Ross Tapsell, and Nicole Curato, identify three trends in their New Mandala paper: a move from celebrities to “micro-influencers,” a focus on closed groups, and the increasing centrality of information operations to campaigns.
  • Evidence of digital deception surrounding the July Democratic debates continues to emerge. On Medium, Nir Hauser, co-founder of AI startup VineSight, details the most viral examples of disinformation about candidates (the largest shares targeting Joe Biden and Kamala Harris). In the Wall Street Journal, Maureen Linke and Eliza Collins reveal that hundreds of “bot-like” accounts spread racist and inflammatory disinformation during the debates, continuing to play into conspiracy theories about Harris. 
  • For Vox, Sigal Samuel writes that the “final privacy frontier”—our brains—may be broached as a result of research into brain-machine interfaces conducted by Facebook and other companies. While this “mind-reading” technology would be invaluable for people with disabilities that hamper communication, Samuel observes that it also raises major ethical concerns about privacy, algorithmic accountability, changing norms, existential alienation, and (lack of) oversight—with implications for political communication as well. (And, on a related note, both Twitter and Instagram copped to data leaks in the last week.)
We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan
Brought to you by:
 
MapLight     DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.

