Abstract
As ordinary citizens increasingly moderate online forums, blogs, and their own social media feeds, a new type of censoring has emerged wherein people selectively remove opposing political viewpoints from online contexts. In three studies of behavior on putative online forums, supporters of a political cause (e.g., abortion or gun rights) preferentially censored comments that opposed their cause. The tendency to selectively censor cause-incongruent online content was amplified among people whose cause-related beliefs were deeply rooted in or “fused with” their identities. Moreover, six additional identity-related measures also amplified the selective censoring effect. Finally, selective censoring emerged even when opposing comments were inoffensive and courteous. We suggest that because online censorship enacted by moderators can skew online content consumed by millions of users, it can systematically disrupt democratic dialogue and subvert social harmony.
1. Introduction
In the run-up to the 2016 presidential election, the moderators of a large online community of Trump supporters deleted the accounts of over 2,000 Trump critics. The moderators even threatened to “throw anyone over our walls who fails to behave themselves” (Conditt, 2016). This phenomenon of silencing challenging voices on social media is not limited to the hundreds of thousands of designated moderators of online communities and forums; even ordinary citizens can delete comments on their own posts and report or block political opponents (Linder, 2016). To study this new form of censorship, we developed a novel experimental paradigm that assessed the tendency for moderators to selectively censor (a) content that is incongruent with their political cause (a political position or principle that people strongly advocate) and (b) the authors of such incongruent content. The studies also tested whether identity-related processes amplified the selective censorship of cause-incongruent content. Further, we tested whether the identity-driven selective censoring of political opponents’ posts occurs even when opponents express their views in a courteous and inoffensive manner. To set the stage for this research, we begin with a discussion of past literature on biased exposure to online content.
1.1. Biased exposure to online content: selective information-seeking and avoidance
Behavioral scientists have long noted that people create social environments that support their values and beliefs (McPherson et al., 2001). People gravitate to regions, neighborhoods, or occupations in which they are surrounded by individuals with similar personalities (Rentfrow et al., 2008) or political ideologies (Motyl et al., 2014). Once in these congruent environments, people are systematically exposed to information that aligns with their own views (Hart et al., 2009; Sears and Freedman, 1967). In addition, people actively display biases in behavior (e.g., choice of relationship partners) and cognition (e.g., attention, recall, and interpretation of feedback) that encourage them to see more support for their beliefs than is justified by objective reality (Garrett, 2008).
Parallel phenomena can occur in virtual worlds. People often find themselves in online bubbles of individuals who share political beliefs and information with each other but not with outsiders (Adamic and Glance, 2005; Barberá et al., 2015). They also actively seek websites or online communities that support their pre-existing opinions (Garimella and Weber, 2017; Iyengar and Hahn, 2009), and follow or connect with individuals whose opinions they endorse (Bakshy et al., 2015; Brady et al., 2017). And when they process information that they encounter, they display confirmation biases that warp their visions of reality (Hart et al., 2009; Van Bavel and Pereira, 2018). Some evidence also suggests that in addition to actively seeking attitude-consistent online content, people also avoid attitude-inconsistent content (Garrett, 2009a). Importantly, biases in information seeking are strongest for content related to political and moral issues (Stroud, 2017) and are most prevalent among those who have strong views or ideologies (Boutyline and Willer, 2017; Hart et al., 2009; Lawrence et al., 2010).
Although researchers have investigated biases in how people seek, consume, or avoid information in online contexts, to the best of our knowledge, they have yet to examine how people might use censorship to shape the content to which they and others are exposed. It is increasingly possible for individuals to censor others online by deleting others’ comments on their own posts and pages (John and Dvir-Gvirsman, 2015; Sibona, 2014). Moderators of popular social media pages and large forums can censor on a far greater scale, as they often exercise control over content that millions of users view (Matias, 2016a; Wright, 2006).
Censorship is more extreme than biased information seeking because, in addition to biasing one’s own online environment, it delimits the online content to which other people are exposed. Also, by silencing dissenters, censorship prevents them from voicing their views at all. And although the psychological processes underlying censorship may overlap with some of the defensive motivations that produce selective information seeking (Hart et al., 2009), censorship may additionally entail a hostile motivation to nullify opponents of the cause.