Digital Deception Decoder 
May 4, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • At this week’s F8 developers conference, Mark Zuckerberg announced a redesign and shift to Groups as part of the platform’s new focus on privacy, as described by Mike Isaac for The New York Times. In the Washington Post, Cat Zakrzewski explores concerns that this shift will reproduce existing disinformation problems. Poynter’s Daniel Funke and Susan Benkelman discuss how moving to ephemeral messaging would complicate fact-checking. And The Outline’s Casey Johnston points out that the shift to “private” channels won’t stop Facebook from using metadata to continue to target ads in invasive ways (business as usual). Meanwhile, Todd Haselton reports for CNBC that Google will soon allow users to control how long it stores their location and browsing data.
  • At Vox, Kurt Wagner dives into the history of Facebook’s struggles with the move to mobile, connecting it to the company’s virtual reality investments, defensive stance vis-à-vis Google, loss of public trust, and threat of regulation. (As reported in WIRED last year, a book of inspirational quotes distributed to Facebook employees on the eve of its IPO included the line, “If we don’t create the thing that kills Facebook, someone else will.”)
  • On CNN, Open Markets Institute’s Sally Hubbard challenges both Facebook’s supposed turn to privacy and the FTC’s milquetoast enforcement, noting, “Facebook's spying on citizens is a business choice, and the FTC allowing it to do so is a policy choice.” She includes several options the FTC could take to promote meaningful privacy and competition. Politico’s Nancy Scola examines measures beyond fines that the FTC is considering in its settlement with Facebook, including the creation of an “independent” privacy oversight board within the company and increasing accountability for Zuckerberg. And Vox’s Matthew Yglesias explains the rationale behind mounting political pressure to break up Big Tech.
  • Banhammer: On Thursday, Facebook banned Alex Jones and InfoWars, Louis Farrakhan, Laura Loomer, Milo Yiannopoulos, and others from both Facebook and Instagram for their advocacy of violence and hate, according to Casey Newton at The Verge. Facebook’s move comes on the heels of yet another shooting—this time, at a California synagogue—that was organized and advertised on social media. In The New York Times, Charlie Warzel bleakly observes that between this attack and the Christchurch massacre, “it seems real-world murderous hate crimes have become a message board meme of sorts” for white nationalists. And DFR Lab’s Emerson T. Brooking breaks down the digital connections between the two incidents.
  • For WIRED, Paris Martineau offers a little-considered perspective on the problems of digital media, disinformation, and violence: the emotional toll it has taken on the researchers striving to illuminate the problems and fight for solutions. Among other factors, there is a feeling of disillusionment with the internet’s early promise—a theme that The Atlantic’s Alexis C. Madrigal picks up on in his essay on “The End of Cyberspace.” Madrigal ends on a more hopeful note: “But as cyberspace breaks down as an organizing concept for what people do with their internet devices, it opens space for rekindling the concept of what the internet should be, normatively.”
  • In an op-ed in Quartz, Harvard STS fellows Maciej Kuziemski, Nina Frahm, and Kasper Schioelin argue that both Zuckerberg’s technological solutionism and Warren’s antitrust perspective are missing one element: empowering the public to be part of the policymaking process and decide their own relationship to tech. They suggest that current frameworks being considered reinforce the corporate models that created the problems.
  • Targeting: For The New York Times, Stuart A. Thompson illustrates how ad targeting works, discussing where data comes from, how data brokers operate, and how narrow slices of the population can be specified. Meanwhile, at Popular Information, Judd Legum continues to find hundreds of examples of misleading advertising by the Trump campaign that violate Facebook’s policies. In his conversations with Facebook, Legum discovered that the platform has been relying heavily on algorithms to vet political advertising. And a team of researchers working with Mozilla has determined that Facebook’s Ad Archive API is missing several crucial pieces of information, such as ad targeting criteria.
  • The Social Science Research Council announced the first cohort of Social Media and Democracy Research Grant recipients, who will receive access to Facebook data (2017 onward) to conduct a variety of projects researching the role of digital media in the spread of disinformation and political polarization in elections around the world. At Facebook, Elliot Schrage and Chaya Nayak discuss the data sets that will be available to researchers and how they have sought to protect user privacy.
  • DC friends: Join MapLight's Ann Ravel (former Chair of the FEC) and DigIntel's Sam Woolley on May 15 for a lunchtime discussion about digital deception in the 2020 election. RSVP here.

We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan

Brought to you by:
MapLight · DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.
