Digital Deception Decoder 
September 14, 2018

MapLight and the Digital Intelligence (DigIntel) Lab at the Institute for the Future have teamed up on a free newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Kings: Evan Osnos’s profile of Mark Zuckerberg in The New Yorker is full of insightful gems, like this connection between Zuckerberg’s reverence for Augustus Caesar and Facebook’s culture, which resonates given the many questions about Facebook’s role in democracy: “Like Augustus, he is at peace with his trade-offs. Between speech and truth, he chose speech. Between speed and perfection, he chose speed. Between scale and safety, he chose scale.” And for another peek into Silicon Valley royalty, see Mark Bergen and Austin Carr’s piece for Bloomberg on Alphabet CEO Larry Page, who has been conspicuously absent from current debates.
  • That said, take profiles of tech leaders with a grain of salt, as The Verge’s Casey Newton points out. Newton is writing about Jack Dorsey’s many recent interviews, but the point applies to other tech executives as well: “It isn’t that the questions were bad, or that Dorsey sidestepped them. It’s that what he thinks is ultimately less consequential than what he does.” And Ellen Pao (tech sector diversity advocate and former CEO of Reddit) offers scathing criticism in WIRED of social media CEOs’ relentless prioritization of growth and their subsequent claims of innocence when problems become apparent.
  • WIRED’s Issie Lapowsky takes a look at what she calls a “new playground for trolls”: the use of texting tools by political groups. In this case, a text message circulated via the texting service Relay impersonated Beto O’Rourke’s campaign and called for volunteers to participate in voter fraud, drawing attention to how little oversight such services currently receive.
  • Per the MIT Technology Review, researchers in the UK have devised a mathematical model that simulates the effects of false news on voter behavior by treating disinformation as a form of communication noise that voters filter out with varying degrees of success. (Full paper here.) Their findings suggest that simply being aware of the problem may help immunize voters against its effects.
  • Another study, this one from Australian National University, focused on the impact of Twitter bots during the first presidential debate of the 2016 election, according to CNET. Researchers found fewer bots than previous estimates suggested, but determined that they were 2.5 times more effective at influencing opinion than human social media users. (Full paper here.)
  • DC: With less than two months until the 2018 midterms, the federal government is still scrambling to respond to digital threats. CNBC’s Christina Wilkie reports that this week, President Trump signed an executive order authorizing sanctions as a response to foreign interference in U.S. elections. At Motherboard, Samantha Cole writes that lawmakers have requested a report from the Director of National Intelligence on the emerging threat of deepfakes. And Cecilia Kang discusses the FTC’s new series of hearings focused on the tech industry for The New York Times.
  • The Center for Digital Democracy has a new guide to online political manipulation for voters. “Enough Already!” starts with context from the 2016 presidential election (including Russian interference and the rise of digital advertising in domestic political campaigns) and identifies key tactics used by manipulative actors. The guide gives readers tips for avoiding becoming pawns of targeted digital advertising, including ways to protect their privacy and build media literacy, and it identifies systemic issues that need to be addressed more broadly.
  • Fact checking Facebook: In his Popular Information newsletter, Judd Legum investigates how an explicitly conservative publication, The Weekly Standard, became an official Facebook fact checker despite having little experience in the area (and after failing its initial certification review by Poynter’s International Fact-Checking Network). Meanwhile, Daniel Funke writes for Poynter that Facebook’s fact-checking media partners around the world are increasingly coming under attack from trolls, with little support from Facebook.

We want to hear from you! If you have suggestions for items to include in this newsletter, email them to hamsini@maplight.org. - Hamsini Sridharan

Brought to you by MapLight and the DigIntel Lab
Copyright © 2018 MapLight. All rights reserved.

