Digital Deception Decoder 
April 5, 2019

Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Algorithmic bias: Two weeks ago, Facebook agreed to stop letting advertisers target housing, job, and credit ads by gender, race, and other potentially discriminatory demographics. Immediately after that settlement with civil rights groups, the Department of Housing and Urban Development charged the company with violating the Fair Housing Act. Now, according to Samantha Cole at Motherboard, a study by researchers at Northeastern, USC, and Upturn indicates that, beyond the targeting criteria advertisers enter, Facebook’s algorithms may skew ad delivery by race and gender as a side effect of market and financial optimization (full paper here).
  • Meanwhile, Josh Constine reports for TechCrunch that, in addition to changing its ad targeting criteria, Facebook recently expanded its political ad archive into a broader “Ad Library” that includes all ads run on the platform. On a related note, Mozilla, working with 10 independent researchers, has released a set of guidelines for political ad APIs that would support true transparency by facilitating election monitoring and independent research.
  • Last Friday, Mark Zuckerberg published an op-ed in the Washington Post calling for government regulation to impose standards and accountability in four areas: harmful content, election integrity, privacy, and data portability. While regulation is certainly necessary, Zuckerberg’s proclamation was met with some well-earned skepticism. In a Bloomberg interview, the Open Markets Institute’s Sally Hubbard argues that some of Zuckerberg’s proposals are “silly,” and mainly intended to try to block more stringent regulation of Facebook’s core business model. Meanwhile, for The Verge, Casey Newton unpacks the strategy behind Facebook’s recent apparent concessions.
  • In the span of little more than a week, Google launched and then cancelled an AI ethics board. At Vox, Kelsey Piper examines the furor that broke out over the board’s inclusion of the conservative Heritage Foundation’s president, as well as the board’s structural limitations. In The Verge, James Vincent examines the recent trend of AI ethics boards among tech companies, which often lack transparency or meaningful accountability and so amount to PR exercises meant to stave off government regulation: “ethics washing,” as the academic Ben Wagner puts it. Meanwhile, Facebook has opened a public input process for the design of its content moderation oversight board; hopefully the company will bear some of these lessons in mind.
  • In Bloomberg, Mark Bergen paints a disheartening picture of failures by YouTube leadership to heed warnings about toxic content. According to the report, Susan Wojcicki and other executives were made aware of the ways their recommendation engine surfaces disinformation and other types of problematic content, but chose not to address the issues until it was too late.
  • BuzzFeed’s Craig Silverman explores the role of older Americans in the spread of digital disinformation. As Silverman points out, seniors will soon be the largest age group in the U.S. and are already among the most politically active, but research shows a significant digital literacy gap that makes them more likely to fall prey to disinformation, scams, and other online harms. As the population ages, these dynamics will only worsen unless they are addressed.
  • Next week, voting begins in India’s general election, a process that will take several weeks for the world’s largest democracy. On Monday, Facebook announced that it had removed more than 1,000 pages, groups, and accounts based in India and Pakistan that were flagged for “coordinated inauthentic behavior” around the election, writes Christine Fisher in Engadget. For The New York Times, Vindu Goel and Sheera Frenkel examine the company’s efforts to stem the tide of disinformation in India, noting that, “Facebook’s performance will be a prelude for how it navigates a likely onslaught of propaganda, false information and foreign meddling during the 2020 presidential election in the United States.”

We want to hear from you! If you have suggestions for items to include in this newsletter, please email hamsini@maplight.org. - Hamsini Sridharan

Brought to you by MapLight and the DigIntel Lab
Copyright © 2019 MapLight, All rights reserved.

