March 29, 2019
Produced by MapLight and the DigIntel Lab at the Institute for the Future, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!
- Facebook announced Wednesday that it is banning explicit “praise, support and representation of white nationalism and white separatism” on Facebook and Instagram -- a move quickly cheered by civil rights advocates. Previously, Facebook had distinguished between “white supremacy” (which was not permitted) and “white nationalism” and “white separatism” (which were). Now, users attempting to search for or post content related to white nationalism will be directed to Life After Hate, a non-profit founded by former extremists that helps people leave the “violent far-right.” Motherboard reports that after speaking with experts, Facebook decided that white nationalism and white separatism are “inherently hateful.” But will the ban work? Louise Matsakis explores that question for WIRED, noting that much white nationalist rhetoric is coded and implied -- making it harder to track and eliminate.
- In preparation for the 2020 Census, the Census Bureau has asked tech giants Google, Facebook, and Twitter for help against disinformation campaigns designed to interfere with an accurate count. So far, details of what that collaboration will look like are limited, according to Reuters. In the meantime, the Supreme Court will hear arguments next month about the Trump administration’s attempt to put a citizenship question on the census, which research suggests could also limit participation.
- Backing down on the backfire effect. The “backfire effect” holds that debunking a claim that supports someone’s underlying ideological beliefs actually makes them believe the claim more strongly. But new research from UK-based Full Fact, which examined seven existing studies, says not so fast. Full Fact says we still need more evidence, but that “while backfire may occur in some cases, generally debunking can make people’s beliefs in specific claims more accurate.” Laura Hazard Owen has a great summary of the findings in Nieman Lab.
- In other Facebook news, the company announced it had removed more than 2,500 pages, groups, and accounts connected to inauthentic behavior and disinformation campaigns in Iran, Russia, Macedonia, and Kosovo. Facebook says it “didn’t find any links between these sets of activities, but they used similar tactics by creating networks of accounts to mislead others.”
- Digital privacy and poverty: In a thoughtful piece for Fast Company, Ciara Byrne outlines how poorer Americans are disproportionately impacted by digital privacy violations. As Byrne explains, "how their data is used and abused, and the harm that they suffer as a result, shows one potential future for all of us."
- In the shallows: You probably already know about the threats posed by “deepfakes.” Sam Gregory of WITNESS, which trains people to use video to fight for human rights, is pushing for an enhanced focus on tackling “shallowfakes” as well. In a post for the MIT Technology Review, Bobbie Johnson outlines Gregory’s contention that society must first get a handle on deceptive video that uses crude tactics, like mislabeling a location, to trick viewers.
- Headed to the Unrig Summit in Nashville? If so, you can catch MapLight’s Hamsini Sridharan talking about data protection and battling misinformation during a panel discussion at 2:15 Central on Friday (today!). Other panel members include Ash Bhat of RoBhat Labs, Ben Scott of Luminate, and Brandi Collins-Dexter of Color of Change. Learn more about the summit here.
We want to hear from you! If you have suggestions for items to include in this newsletter, please email email@example.com. - Hamsini Sridharan