|November 26, 2019
Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!
We want to hear from you!
- Note: Longer newsletter this week to make up for a couple of weeks off — there’s a lot to catch up on!
- You might know him best as Borat, but Sacha Baron Cohen’s keynote speech for the Anti-Defamation League’s Never Is Now Summit establishes him as a sharp critic of unaccountable tech platforms. Baron Cohen offers a pointed rebuttal of Zuckerberg’s recent address at Georgetown on free expression: “This is ideological imperialism—six unelected individuals in Silicon Valley imposing their vision on the rest of the world, unaccountable to any government and acting like they’re above the reach of law. It’s like we’re living in the Roman Empire, and Mark Zuckerberg is Caesar.”
- The political ad policy war rages on. Most recently, Google entered the fray, announcing that it will ban microtargeting for election ads and clarifying that it prohibits false claims in advertising. In The New York Times, Nick Corasaniti and Matthew Rosenberg report that campaigns dislike the policy, which may increase their advertising costs. Facebook may join Google in limiting microtargeting for political ads, but is holding firm on its policy of not fact-checking them, according to Dylan Byers at NBC. Meanwhile, Business Insider’s Tyler Sonnemaker writes that Snap does fact-check political ads, per CEO Evan Spiegel.
- Responses: For Slate, Microsoft Research social media expert Tarleton Gillespie argues that all online advertising should be made transparent via the types of mechanisms contained in the Honest Ads Act. UNC’s Daniel Kreiss and Matt Perault (a former director of public policy at Facebook) also suggest policy ideas for platforms, including moving away from “data opacity” in ad targeting, limiting the categories advertisers can target, facilitating counter-speech, and donating political ad proceeds to organizations focused on election integrity. And in The Guardian, Julia Carrie Wong asks an important question about what Facebook’s new political ad policy and claims about free speech mean in the global context.
- Fighting Facebook: Kristen Clarke, President of the Lawyers’ Committee for Civil Rights Under Law, recently penned a letter detailing how Facebook’s refusal to fact-check political ads, along with its other civil rights failures, leaves it open to liability from a variety of angles. “The First Amendment is not the only amendment. [...] If Facebook wants to model itself on our constitutional values, then it must also prioritize the right to vote, due process, and equal protection.” And for The Hill, Emily Birnbaum discusses the launch of a new progressive coalition: the Campaign to Regulate and Break Up Big Tech.
- For CNN, Brian Fung and Donie O’Sullivan describe how the first presidential impeachment hearing was marked by conspiracy theories and ad spending on social media. Vox’s Emily Stewart and Rani Molla delve deeper into the Trump campaign’s enormous spending on impeachment-related ads on Facebook. Meanwhile, the Times’ Sheera Frenkel reports that YouTube and Facebook are struggling to block attempts to reveal the whistle-blower who set the inquiry into motion.
- Ad databases: As the debates over political ad policy rage, at the Center for Responsive Politics, Anna Massoglia and Karl Evers-Hillstrom report that presidential candidates have spent more than $100 million on digital advertising since platforms started publishing this data last year. And Issue One’s Michael Beckel, Alex Matthews, and Amisa Ratliff have published a useful guide to Facebook, Google, and Twitter’s ad transparency databases — and their flaws.
- Deepfake democracy: For WIRED, Victoria Turk argues that we don’t need to wait for deepfakes of (male) politicians to emerge for the technology to affect democracy; it’s already doing so through widespread online attacks on women. Turk writes, “The intimidation and silencing of women – through whatever means – is not compatible with a healthy democracy. While we should be wary of the potential for deepfakes to manipulate and falsify political messaging, we need to recognise the very real threat posed by deepfakes used to target and harass women.” On a related note, Twitter has released a new policy on labelling deepfakes for public comment by November 27 (tomorrow!).
- Tactics: A recent study published by the Social Science Research Council finds that organized networks of bots and cyborg accounts spread Islamophobic narratives about Muslim candidates in the 2018 midterms. In a new white paper for Stanford’s Internet Observatory entitled “Potemkin Pages and Personas,” Renée DiResta and Shelby Grossman examine Russian GRU influence operations through 2019, finding that the agency has updated its “narrative laundering” techniques for social media and is engaging in “hack-and-leak operations.” And the Washington Post’s Elyse Samuels and Monica Akhtar speak with researchers to learn more about how bot networks have evolved since 2016. For CNN, Ben Popken describes one example: misinformation tracking company VineSight found evidence of bot activity in disinformation surrounding the Louisiana and Kentucky gubernatorial races.
- Podcast rec: Social media scholars Quinta Jurecic, Evelyn Douek, Kate Klonick, and Alina Polyakova are joining forces for a new Lawfare podcast series on disinformation leading up to the 2020 election. They call it “Arbiters of Truth.”
If you have suggestions for items to include in this newsletter, please email email@example.com.
- Hamsini Sridharan