Digital Deception Decoder 
April 12, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • In TechCrunch, MapLight’s Ann Ravel calls for tech companies to coordinate better with one another to achieve meaningful transparency for political advertising. Ravel observes, “The lack of consistency in disclosure across platforms, indecision over issue ads, and inaction on wider digital deception issues including fake and automated accounts, harmful micro-targeting, and the exposure of user data are major defects of this self-governing model.”
  • Regulators outside the U.S. are weighing bold moves to tackle digital deception. Colin Lecher reports for The Verge that UK lawmakers have proposed imposing a duty of care on technology companies, to be enforced by a new regulatory authority. In the MIT Technology Review, James Ball discusses fears among privacy advocates that the mandates under consideration could cross into censorship if interpreted too broadly. Meanwhile, according to a joint project by BuzzFeed News and the Toronto Star, the Canadian government is seriously considering a range of options for regulating tech, citing the industry’s failure to self-regulate.
  • Meanwhile, in the U.S., the case for regulating tech remains a matter of debate. In The New Yorker, Sue Halpern examines the gaps between Mark Zuckerberg’s calls for regulation and the reality of inaction by lawmakers and regulators. In The New York Times, Sarah Jeong points out that Zuckerberg’s proposal for an independent content regulator for digital speech is wildly unlikely in the American legal context. And the Electronic Frontier Foundation’s Corynne McSherry and Gennie Gebhart break down the limitations of Zuckerberg’s “speech police” proposal and the dangers of a national privacy law that preempts stronger state-level rules.
  • Some lawmakers are moving forward with legislation. In Bloomberg, Ben Brody and Gerrit De Vynck report that Sen. Mark Warner (D-Va.) and Sen. Deb Fischer (R-Neb.) have introduced a bill banning “dark patterns”—design features that manipulate users into giving up their data. And in NBC News, Cyrus Farivar explains the recently introduced Algorithmic Accountability Act of 2019, proposed by Democratic senators Cory Booker (N.J.) and Ron Wyden (Ore.) and Rep. Yvette D. Clarke (D-N.Y.), which would authorize the FTC to scrutinize automated decision-making processes for bias.
  • In this must-read piece for Mother Jones, Pema Levy and Tonya Riley review the years-long battle that civil rights advocates have fought to get Facebook to take discrimination, hate speech, and voter suppression seriously. Groups such as Color of Change, the Center for Media Justice, and Muslim Advocates gained traction with the company only after the Russian interference and Cambridge Analytica scandals broke. Meanwhile, according to Judd Legum in Popular Information, the Trump campaign is running microtargeted ad variants on Facebook designed to manipulate black voters and women.
  • Major platforms have introduced further incremental changes to try to contain disinformation. At Engadget, AJ Dellinger reports that Twitter has limited the number of accounts a user can follow daily to 400 in an attempt to block spammers and bots. In WIRED, Emily Dreyfuss and Issie Lapowsky document a number of new “integrity” measures announced by Facebook, including the introduction of a “Click Gap” metric to compare the popularity of content on Facebook with its popularity on the rest of the internet (to determine whether it is being artificially promoted); changes to Groups to downrank those that regularly spread misinformation; “Trust Indicators” on news stories; and more.
  • More hearings: This week, the House Judiciary Committee called on Facebook and Google to explain their actions regarding white nationalism. Per Tony Romm in the Washington Post, comments on a YouTube livestream of the hearing were shut down after they were, unsurprisingly, hijacked by hate speech. (If you’re curious about Facebook’s recent ban of white nationalist content, check out Bryan Menegus in Gizmodo on the company’s policy considerations.) And because the myth of conservative bias by social media companies persists, the Senate Judiciary Committee held a hearing on the topic, as covered by Makena Kelly at The Verge.
  • Meanwhile, researcher Jonathan Albright offers an interesting breakdown of the rise of hate speech on social media. “Without a doubt, our over-reliance on discovery and recommendation algorithms, as well as the ubiquity of social apps and networking platforms — a market controlled almost exclusively by a handful of American-based multinational companies — have helped fuel the rise of hateful ideologies.”
  • The New York Times has launched “The Privacy Project,” a monthslong examination of technology and privacy issues. The project includes commentary from a number of experts, including Charlie Warzel on tech’s broken promises, Emily Chang on privacy as a feminist issue, and Tim Wu on the historical ties between privacy and capitalism.
  • Space oddity: This week, we were captivated by the first image of a black hole. Naturally, as Patrick Howell O’Neill reports in Gizmodo, this prompted YouTube’s recommendation algorithm to promote a space-related conspiracy theory.

We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan

Brought to you by: MapLight and the DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.

