
“Welcome to Gab”: Exploring Political Discourses in a Non-Moderated Social Media Platform

Published on Aug 17, 2021

The ecosystem of social media platforms such as Twitter and Facebook has increased the ease of global communication, particularly opinion sharing. As a predominantly online movement (Berger, “The Alt-Right” 4; Heikkilä 4), the alt-right—an umbrella term for loosely affiliated right-wing political movements across the world, centred in the United States (Berger, “The Alt-Right” 4)—has been proficient at utilizing and leveraging digital tools for recruitment and propaganda (Daniels, “The Algorithmic Rise” 64).1 Its propensity for adopting these technologies to amplify and share its ideologies with different publics and communities (Lewis 3; McIlroy-Young and Anderson 651) is connected to a much longer legacy of white supremacy (Daniels, “The Algorithmic Rise” 61). As early adopters of new media technology, the alt-right is primed to interact with younger users who often spend significant time on these platforms (Lewis 15). Recent features in the New York Times (Roose) and Washingtonian (“What Happened After”) have shown that younger white men on various social media platforms have been “radicalized” by the rhetoric of the alt-right. In part as a response to the extremist and conspiratorial content created by the alt-right on social media, several mainstream social media platforms have created stronger content moderation policies, such as banning users who spread conspiracy theories, engage in hate speech, and repeatedly harass others online. This, in turn, has prompted some users to abandon mainstream social media platforms for more content-permissive alternatives, such as Gab (Berger, “The Alt-Right” 35).

Gab describes itself as “a social network that champions free speech, individual liberty, and the free flow of information online. All are welcome” (Gab Social). It emerged in response to the banning of several prominent personalities on Twitter, following a series of public events in which users promoted hate speech (Berger, “The Alt-Right” 35). The platform’s creation coincided with then-presidential candidate Donald Trump’s 2016 campaign. Gab’s promise of free speech translates into little content moderation (Lima et al. 515) and has attracted users who were banned from other social media platforms. As such, many alt-right supporters have chosen to become users of this social media platform, resulting in an “echo chamber” in which users are predominantly fed information that reinforces their own opinions and ideological views (Lima et al.; Thomas). As of 2021, it is estimated that Gab has more than 3 million users (Brown). After being banned by all mainstream social media platforms in the aftermath of the 2021 Capitol Hill Insurrection, former American president Donald Trump offered to buy Gab as a way to regain access to social media (Brown). As Gab has increased in popularity among the alt-right and, recently, among established political actors (Maulbetsch), it is important to examine its discourses in depth.

This study builds on existing research on the alt-right’s use of social media platforms such as Twitter and YouTube (Berger, “The Alt-Right”; Stern) and on Lucas Lima et al.’s research, which employs word clouds to examine dominant discourses on Gab. We use structural topic modelling (STM), an unsupervised machine learning method for text classification, to identify prevalent topics across a 5% random sample of Gab posts published between August 2016 and July 2019. In this study, we closely examine the three most prevalent topics and then analyze three additional topics that are closely connected to them. Together, the six topics provide both broad and nuanced understandings of the thematic content on the platform. We were particularly interested in how the coded language of the alt-right, which has racist, homophobic, violent, and misogynistic undertones, is not necessarily filled with overt expressions of hate. We begin with an overview of the literature on social media and the alt-right to introduce our study.

Social Media and Hate Speech  

Recent research indicates that Gab has become more polarized since its inception in 2016, with an increase in polarized political content and right-wing–associated topics over time (McIlroy-Young and Anderson 653). Gab users initially engaged in community-building conversations, which shifted over time to alt-right political topics, including increases in anti-Semitic language. Lima et al. examined the “echo chambers” of Gab and found that many extremist personalities were on Gab, including Alex Jones, Alex Linder, Milo Yiannopoulos, and Richard Spencer. Using word cloud analysis, the authors found that the majority of frequent terms on Gab pertain to political topics such as Trump, #MAGA, and #SpeakFreely. In addition, the authors employed Google’s Perspective API to detect whether a post is toxic or not.2 The authors argue that this method helps detect whether the language on Gab is hate speech, and they found that the majority of posts have low or normal-range toxicity scores. However, this approach misses hate language that is coded in a non-toxic, non-violent, banal way. Hate language that is layered with metaphors, humour, or other forms of offensive language would not be recognized as toxic or hateful by this algorithmic analysis. It is thus important to analyze Gab more systematically and examine its discourses not in terms of a predetermined toxicity score and overt expressions of hate, but in specific contexts and with a human understanding of language.

New media scholar Wendy Hui Kyong Chun critically examines the concept of homophily, the idea that “similarity breeds connection” (60), to think through online interactions and the role algorithms play in them. In Chun’s argument, homophily drives the echo chamber phenomenon by organizing cyberspace around consensus and similarity of opinions and values. Homophily claims friendship and connection but, in practice, begins with hatred, which binds bodies together. Thus, the reason for groups to form is not love based on similarity (Ahmed 123) but rather hatred of the “other,” which is constructed as a threat. Chun, therefore, argues that segregated online spaces are driven by hatred concealed by friendship or community. This is an important contribution to examining Gab because it suggests that ideas of friendship, community, and commonality can be rooted in hatred, such as in shared anti-Semitic sentiments.

Social Media and the Alt-right 

When Paul Gottfried, a humanities professor, coined the term “alternative right” in a 2008 speech in front of other right-leaning intellectuals (Stern 14), the ecosystem of social media platforms was just emerging. Following this speech, Gottfried formed an “independent intellectual right” organization with Richard Spencer, a self-identified white nationalist, who two years later would launch AlternativeRight.com, which aimed to address the concerns of the “white nationalist intelligentsia” and develop a plan to grow the movement by reaching a younger audience (Hartzell 16, 18). Spencer took on this role and moved online to reach a wider audience for the alt-right (Hartzell 11), whose activity on social media platforms like Twitter and Facebook has since brought its discourse into public conversation (Heikkilä 10).

In their solution-oriented paper on socio-technical security, Matt Goerzen et al. frame social media platforms as sociotechnical systems that bring human and non-human actors together and therefore are governed not just by social but also technological norms (2). Sociologist Jessie Daniels, meanwhile, describes the alt-right as both an organizing force that manifests white nationalism and white supremacy and as an influence on internet culture, described as “the emerging media ecosystem powered by algorithms” (62). While white nationalists strategically spread anti-Semitic and racist ideologies through social media, algorithms have helped amplify their spread. She argues for the need to investigate the relationship between hate crimes, white nationalism, and the algorithms of search engines and social media platforms. Alice Marwick and Rebecca Lewis have investigated the strategies that the alt-right employs in this current media system, including the strategic use of memes and bots as well as their knowledge of internet culture and irony (36). Like Daniels, they look at the organizations behind the spread of racist and anti-Semitic ideologies and the current media ecosystem. Bertram Vidgen’s work on Islamophobia on Twitter demonstrates the way in which this platform, with its limited character count, encourages “aggressive antagonistic interactions” rather than democratic dialogue (74). In a 2018 report, Lewis describes the way YouTube rewarded content producers and their associated networks regardless of their content, a business model that has not only generated a community of followers for right-wing influencers but also provided them with monetary incentives to generate larger followings. 

 Shared Thematic Concerns Across the Alt-Right 

The alt-right movement is created through the spread of white nationalist and white supremacist ideologies and the strategic use of the social media ecosystem. Echoing Daniels’s concern about framing the alt-right merely as “the angry white male,” and thus around individual actors with extremist perspectives, this paper approaches the alt-right as creators of a network of communities that utilize various strategies and tools to produce and maintain discourses and activities (Daniels, “The Algorithmic Rise” 62; Marwick and Lewis 34). Looking at alt-right Twitter users, J. M. Berger has identified four discourses that circulate among alt-right users, including support for Trump and white nationalism as well as anti-Muslim speech (“The Alt-Right” 6). The main influencers that Berger found in this network often identified as white nationalists, demonstrating that white nationalist sentiment underlies alt-right movements (Daniels, “The Algorithmic Rise” 62). Lewis points to the ways in which right-wing content creators on YouTube have created an alternative influence network (AIN) that is a “fully functioning media system” offering an alternative to mainstream news and media (Lewis 4). Through this process, the AIN facilitates radicalization on the part of content creators and their viewers. For instance, largely alt-right social media platforms such as 8chan and Gab have hosted white supremacist terrorists who spread their manifestos prior to their shootings (Thomas; McIlroy-Young and Anderson 651). These shooters were cultural producers of alt-right content whose messages were retweeted, reposted, and circulated. Understanding the alt-right as a community in which actors strategically use media to spread their ideological discourses requires the examination of the communicative practices of the alt-right (Nadler 6).

Production and Maintenance of Culture

While Lima et al. note that the 2016 US presidential election motivated a large group of new users to join Gab, the platform has since attracted a fair number of users outside of the United States. In this way, the rise of the global right has created relationships with US alt-right movements, and while race-based nationalism is still highly relevant in unpacking the ideologies driving these discourses, a broader set of frameworks is needed to understand the relationships between these communities. Mark Davis points to the ways in which these communities can transcend geographic limitations and gestures towards a transnational movement that is paradoxically in “defence of national and other chauvinisms” (135). Using Australian extremist groups as an example, he articulates how they are nationally based but share transnational ideologies with a common focus on anti-refugee sentiment and anti-Semitism. In his study of Facebook posts, he notes the wide circulation of images and memes, including a modified version of Trump’s campaign slogan and the use of Pepe the Frog.3

Davis notes the ways in which publicity, authority, and accountability to sovereign government change as the far-right movement becomes transnational (134). Similarly, Niko Heikkilä (4) and Stern note that alt-right movements are largely decentralized and ideologically diffuse. Through this fluidity and movement, the production of cultural objects becomes a way to engage with and sustain the alt-right. Both scholars point to the 2016 US presidential campaign as influencing how alt-right cultural objects, such as memes and troll culture, engage with mainstream discourse. Both trace these “new” objects to the longer history of white nationalism. Heikkilä (3) points to the ways in which white nationalism has always produced its own cultural objects, such as films, music, and video games, in opposition to the mainstream, while Stern uncovers the way in which the alt-right remixes and appropriates mainstream cultural products to facilitate its own retellings. Broadly, the alt-right engages with meta-politics: it is more interested in the production of social norms and cultural products than in direct engagement with politics. Hence, Hillary Clinton’s campaign strategy that invoked the “deplorables” provided the alt-right with attention and engagement from the hegemonic public that fueled, and continues to fuel, its strategies and movements.

Heikkilä (38), Stern (18), and Banu Gökarıksel et al. (561) reflect an urgency in studying the alt-right. The tension between giving these movements attention or not has real risks and repercussions that are often violent and often happen in “real life” as well as in online spaces. Understanding the topics that proliferate in these movements’ discourses is a first step in interrupting recruitment and disassembling the movement. Looking into the Gab echo-chamber provides us with foundational knowledge with which to begin this work.

Data and Methods 

We acquired the Gab data from Pushshift.io through an application programming interface. Pushshift.io archives social media data, including data from Gab, making it publicly available for downloading (Baumgartner). Our dataset consists of all Gab posts from August 2016 to July 2019, totalling more than 10 million posts. This paper utilizes a 5% random sample of the entire dataset. We restricted the sample to posts in English, identified using the language variable provided in the dataset as well as CLD2 (Compact Language Detector 2), Google’s language-detection library. We removed spam posts and the automated “Welcome to Gab.ai” message which users receive when they join the platform. Our final data consisted of 403,469 posts created by 40,375 unique users.
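As a rough illustration of this filtering step, the sketch below (in R, the language we also use for the topic modelling) assumes the sample is loaded as a data frame with hypothetical columns body (the post text), lang (the language variable supplied with the dataset), and user_id; it uses the cld2 package, an R wrapper around Google's Compact Language Detector, to cross-check the language and then drops the automated welcome message.

```r
# Minimal sketch of the English-only filtering step; column names are
# hypothetical and the actual Pushshift field names may differ.
library(dplyr)
library(cld2)  # R bindings for Google's Compact Language Detector 2

gab_en <- gab_sample %>%                          # 5% random sample of posts
  mutate(cld2_lang = detect_language(body)) %>%   # re-detect language from the text itself
  filter(lang == "en", cld2_lang == "en") %>%     # keep posts both sources label as English
  filter(!grepl("^Welcome to Gab.ai", body))      # drop the automated welcome message

nrow(gab_en)                  # number of posts retained
n_distinct(gab_en$user_id)    # number of unique users
```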

Structural Topic Modelling

Previous research looking at discourses on Gab used trigram topical analysis and toxicity scores (Lima et al. 519; McIlroy-Young and Anderson 653).4 To get a more contextualized understanding of the content on Gab, we chose to employ structural topic modelling (STM) (Roberts et al., “Structural”) to investigate the different topics that users discuss on the Gab platform. STM is a method that classifies text data into topics for analysis. Specifically, STM allows for the contextualization of latent topics, letting us examine not only the content of the posts within a specific topic but also how that content might be correlated with other topics. This process of contextualization offers insights into text data that other automated text analysis methods do not. Methods using simple term frequency (such as trigram analysis) or toxicity metrics do not necessarily demonstrate the ways in which topics might relate to one another and how they are grouped beyond the frequency of the term. Our choice of STM is motivated by its success in analyzing many political and social datasets (Nelson et al.; Rodriguez and Storer).

Before employing STM, we transformed each post into individual tokens, disregarding word order and thereby creating a “bag of words” for each post. Terms were reduced to their stem form, such that “immigration” and “immigrating” become the common token “immigrat.” Additionally, stop words with no semantic meaning, such as “the” and “of,” were removed from the corpus. After this initial processing, we applied the structural topic modelling approach of Roberts et al. (“Structural”) to determine the topics within the corpus. Topic models are unsupervised machine learning models that estimate latent topics within and between text documents. In these models, a topic can be understood as a set of words representing an interpretable theme, and documents are represented as a mixture of these topics. For each document, proportions across all topics sum to 100%. For example, after fitting a topic model, a document can mostly be captured by the topic “America first” (proportion 70%), followed by “border control” (20%), and other topics (10%). In addition, STM allows us to include information that affects both the distribution of topics in the corpus and the distribution of words in each topic. We incorporated follower counts and an indicator for “bot-written” posts as covariates in the model construction to analyze how topic proportions vary.
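The stm package bundles these preprocessing steps; the sketch below shows roughly how they might be applied, assuming the filtered data frame gab_en and the covariate columns is_bot and followers from the previous sketch (all hypothetical names), and an illustrative rare-term threshold.

```r
library(stm)

# Tokenize, lowercase, remove stop words, and stem each post into a bag of words.
processed <- textProcessor(
  documents = gab_en$body,
  metadata  = gab_en[, c("body", "is_bot", "followers")],  # keep text and covariates aligned
  stem      = TRUE
)

# Drop very rare terms (threshold is illustrative) and re-align documents,
# vocabulary, and metadata.
out   <- prepDocuments(processed$documents, processed$vocab, processed$meta,
                       lower.thresh = 5)
docs  <- out$documents
vocab <- out$vocab
meta  <- out$meta
```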

One of the challenges in using STM is to choose the appropriate number of topics for the corpus (Grimmer and Stewart). While there are methods for estimating the likelihood that the number of topics selected is accurate, there is currently no scientific consensus on how best to select the number of topics a model defines (Grimmer and Stewart). Advances in computational social science, however, have made the selection process much more data-driven (Roberts et al., “Structural”). The STM package uses an algorithm developed by Moontae Lee and David Mimno to estimate the number of topics. 
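In the stm package, Lee and Mimno's algorithm is invoked by fitting a model with K = 0 under spectral initialization. A minimal sketch, reusing the placeholder objects and covariate names above:

```r
# K = 0 with spectral initialization asks stm to choose the number of topics
# using Lee and Mimno's anchor-word algorithm.
fit_auto <- stm(documents = docs, vocab = vocab, K = 0,
                prevalence = ~ is_bot + s(followers),   # illustrative covariates
                data = meta, init.type = "Spectral", seed = 2019)

fit_auto$settings$dim$K   # number of topics selected by the algorithm
```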

To further justify the final number of topics, we utilized the searchK function provided in the STM package for the R programming language (Roberts et al., “Structural” 12); the results indicate that a model with 85 topics is reasonably appropriate because it yields the highest semantic coherence, the lowest residuals, and a maximized lower bound (Figure 1a).5 We also ran multiple models with K = 15, 20, 25, …, 110, 120 to compare different aspects of the models (Figure 1b). The results showed that a model with 85 topics indeed ensures more semantic coherence than the other models. Finally, we used the stminsights package to compare the models and to examine each topic in depth (Schwemmer). A sketch of this diagnostic step follows below.
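A minimal sketch of this diagnostic step, again using the placeholder objects and covariate names introduced above; the grid of K values mirrors the one described in the text.

```r
# Diagnostics over a grid of candidate K values: held-out likelihood,
# residuals, semantic coherence, and the lower bound (cf. Figure 1a).
k_grid   <- c(seq(15, 110, by = 5), 120)
k_search <- searchK(documents = docs, vocab = vocab, K = k_grid,
                    prevalence = ~ is_bot + s(followers),
                    data = meta, init.type = "Spectral")
plot(k_search)   # one diagnostic panel per metric

# Interactive comparison of fitted models and in-depth topic inspection.
library(stminsights)
save.image("gab_stm.RData")  # stminsights loads stm objects from an .RData file
run_stminsights()
```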

Figure 1a: Diagnostics for selection of K number of topics

Figure 1b: Final model comparison of GAB STM

The current study fits four STM models to the data. We document the process of selecting the number of topics (K) in Figures 1a and 1b. We used a set of metrics provided in the STM package: (1) held-out likelihood, the log probability of held-out documents under the fitted model (Wallach et al. 1105); (2) lower bound convergence (Roberts et al., “Navigating” 61), the lower bound of the marginal log likelihood; (3) semantic coherence (Mimno et al. 263), the degree to which the words in a topic co-occur; and (4) residuals (Taddy 1184), the difference between observed and model-predicted values. Figure 1a shows that between 80 and 90 topics produce relatively low residuals and a maximized lower bound, though held-out likelihood and semantic coherence are not optimal. Eighty-five (85) topics were selected as the final K for the current dataset and model.

Concurrent with the K selection diagnostics, several test models were also run (Figure 1b). Model 2 uses Lee and Mimno’s algorithm to select K, with no prevalence variables specified. Model 3 uses the K selection algorithm again but specifies whether the account is a bot and the number of followers as prevalence variables. The final model, model 4, specifies 85 topics and both aforementioned prevalence variables. Figure 1b indicates that, of all the models presented, model 4 offers the best fit for the current dataset. In the following analysis, the research team closely examined the three most prevalent topics and then examined three more topics that were closely connected to the content of the most prevalent topics.
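Fitting and inspecting the final model might look roughly like the sketch below; the covariate names remain placeholders, the call reuses the objects defined earlier, and the topic indices follow the paper's numbering (they would differ across runs and seeds).

```r
# Model 4: 85 topics, with bot status and follower count as prevalence covariates.
fit_85 <- stm(documents = docs, vocab = vocab, K = 85,
              prevalence = ~ is_bot + s(followers),
              data = meta, init.type = "Spectral", seed = 2019)

# Expected topic proportions across the corpus (the kind of output behind Figure 2).
plot(fit_85, type = "summary", n = 7)

# Seven most topic-defining terms and exemplar posts for the top three topics.
labelTopics(fit_85, topics = c(7, 14, 39), n = 7)
findThoughts(fit_85, texts = meta$body, topics = c(7, 14, 39), n = 2)
```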

Results

Figure 2: Top 20 topics ordered by the percentage of the dataset estimated to comprise that topic.

Figure 2 shows the top 20 topics estimated by model 4, including the seven most frequent words defining a topic within the model and the proportion of the dataset that contains the topic. The first three topics, topic 7, topic 14, and topic 39, make up approximately 14% of the dataset. Among the top 20 topics, key terms such as “trump,” “maga,” “media,” “left,” “democrat,” and “law” suggest that many prevalent topics are politically driven. Even though there are some curse words, and some mention of “black” people, there are no racial slurs. To further understand these topics, and what specific posts contain, we examine six topics in depth: the three top topics and three associated topics. Figures 3 to 5 offer visualizations of the top three topics, showing two examples of Gab posts that make up each topic and a word cloud of the most frequent terms within each topic.

Figure 3: Visualization of the top topic: topic 7

The most discussed topic on the platform, topic 7, illustrated in Figure 3, is suggestive of knowledge production and truth-making claims. The seven most topic-defining terms—“just, can, people, don’t, know, say, want”—and the two exemplar posts suggest that the theme of this topic centres on discussions of knowledge, truth claims, opinions, and beliefs. This conversation around knowledge production also corresponds to the alt-right’s narrative of questioning the reality presented in mainstream media and insisting that people should instead create their own versions of reality.

Figure 4: Visualization of top topic: topic 14

Figure 4 visualizes topic 14 and reflects the discussion of freedom of speech on Gab as well as the formation of a community of users that support this endeavour. The seven most defining words of the topic are “one, good, work, thing, way, really, that,” and the exemplar posts speak to the discussion of freedom of speech (“#SpeakFreely” in Figure 4). This corresponds to recent research (Schradie 164) which suggests “freedom of speech” is a rallying point for right-leaning groups to mobilize supporters. The ability to claim freedom of speech is important for both community building and constructing the platform as a place of truth making. The utilization of free speech facilitates the exploration of content that may be typically banned on mainstream platforms like Twitter and YouTube, and in this process of creation, it also helps to support community building with like-minded individuals. The hashtag “#GabFam” (Figure 4) refers to the idea that Gab users are connected as if they are members of a “family.” This fictive kinship suggests a deep identification with the platform and its community of users. Gab is then not simply a social media platform where individuals share their beliefs and opinions; it is also a place that actively promotes and organizes communities of people who share similar interests and ideologies. 

Figure 5: Visualization of top topic: topic 39

The third most discussed topic (topic 39 in Figure 5) is defined by the seven terms “trump, maga, gabfam, president, speak free, obama, support.” Its underlying theme speaks to the platform’s political outlook. Key terms like “maga, gabfam, speak free” demonstrate that the 2016 American presidential campaign was one of the key conversations on this platform, which is also supported by the findings in Lima et al. (519). Similar to McIlroy-Young and Anderson’s findings, this demonstrates the prevalence of political discussions on Gab, especially in support of Trump. Trump’s election was a transitional moment for the alt-right. Heikkilä (10) finds that the alt-right populated mainstream social media platforms such as Twitter and Facebook with pro-Trump discourses. In the exemplar posts, the hashtag “#TRUMP” suggests that pro-Trump messages represent the continuity of the alt-right’s strategic use of social media to spread its political messages both on mainstream social media and on alternative platforms such as Gab.

To summarize, the themes of the three top topics are knowledge production, the platform’s freedom of speech principle, community building, and political discussions about Trump’s campaign and presidency. We now examine topics 19, 27, and 35, which give more nuance to the above-mentioned themes, and discuss in more depth what these narratives might tell us about the coded language used on the platform.

Figure 6: Visualization of topic 19

Figure 6 visualizes topic 19, with the topic-defining terms “expect, event, hide, ground, rip, gas, rip, extra.” A key theme in this topic is Holocaust denial, in which the posts reject the idea that gas chambers existed at concentration camps and thereby reject that a genocide happened. The first exemplar post reproduces the dominant Holocaust denial narrative that extermination chambers did not exist at Auschwitz (“Holocaust Denial and the Big Lie”). The second exemplar post uses the expression “Elie the Weasel” to denounce the famous author and late Holocaust survivor Elie Wiesel. The derogatory nickname has been used by white supremacists to undermine the truth of his work (Kaplan) and of his survival of Auschwitz and Buchenwald, where he lost family members. This topic demonstrates explicit anti-Semitic views on the platform. In Daniels’s analysis of the discourses generated by five organized white supremacist groups in magazine publications in the 1980s, the dehumanization of Jewish people and Holocaust denial were among the most important discourses (White Lies). This same narrative of anti-Semitism is present in these current posts and demonstrates a continuation of these anti-Semitic views. This is one of the few overt expressions of hate speech that we found on the Gab platform.

Figure 7: Visualization of topic 27

Figure 7 visualizes topic 27, with the topic-defining terms “hour, less, minute, almost, red, half, type.” This topic centres on the popular alt-right terms “blue pill” and “red pill,” and the urgency (“hour,” “less,” “minute”) of joining the QAnon movement to be awakened to “the truth” and face “the enemies.” In the first exemplar post, “Q” refers to the “Q” figure of the QAnon conspiracy movement. This movement began in response to Trump’s rhetoric and his announcement that “a storm is coming,” which QAnon followers took as a reference to a coming event in which Trump’s enemies, including liberal politicians, celebrities, and media personalities, would be arrested. The QAnon movement believes in the conspiracy theory that liberals are Satanists and child traffickers. The movement is built around this storm event, which its followers believe will bring their perceived mission to its culmination. This first exemplar post was created in July 2019 in reaction to a Trump staffer’s posting of a picture of a clock on social media, which was interpreted as counting down to the moment when Trump would face and fight his enemies, and the enemies of QAnon followers (Rosenberg and Haberman). The post is a reminder of the urgency of this coming showdown. The second post centres on the metaphor of pills, which comes from the movie The Matrix (1999), in which the hero, Neo, is offered two pills: a red pill and a blue pill. The former refers to truth and knowledge, while the latter refers to ignorance. Neo chooses the red pill and leaves behind his life of deception and ignorance, one controlled by “the system.” The alt-right and QAnon use the metaphors of “red-pilling” and “blue-pilling” in the same way, to refer to the moment of awakening and joining the movement.

Figure 8: Visualization of topic 35

Lastly, Figure 8 shows topic 35, with the topic-defining terms “media, left, liberal, lie, truth, social, fact.” This topic speaks to the rejection of mainstream media as gatekeepers of truth. Gab users believe that mainstream media outlets refuse to present their views and silence their voices. Specifically, the critique targets the liberal, leftist media, which is linked to anti-Semitism, as the first exemplar post demonstrates. The repetition of “lie” in the first exemplar post echoes a defining term of the topic and discredits the veracity of knowledge production in the mainstream media. The exemplar posts speak to the alt-right’s opposition to mainstream knowledge production, or what Trump identifies as “fake news” (Al-Rawi et al. 54). By attacking legacy news and watchdog news outlets, these posts fit into Trump’s fake news narrative and campaign. This campaign justifies Trump followers in their perceived position as underdogs against an oppressive and censoring reality. The second exemplar post, shared in mid-2017 (Figure 8), further criticizes leftist media and demands that this kind of “Anti-Government Media”—which stands for anti-Trump media—should not be funded. This post is interesting because it suggests that a conservative group does not want to be labeled “Alt-right”: the use of “Smear” implies that the group views the “Far Left” as wrongly accusing it of belonging to the alt-right movement, with which it does not want to identify. That the post dates from the first six months of Trump’s presidency suggests that early Trump supporters wanted to distance themselves from the alt-right label.

Discussion

This study employs a topic modelling method that allows for the analysis of large-scale textual data. Using STM, we found and manually labeled 85 topics. We then examined the three most prevalent topics, followed by an analysis of three closely related topics, which allowed for a more nuanced analysis. The six topics that we analyzed in depth focus on themes of knowledge production (all figures), freedom of speech (Figure 4), political dimensions (Figure 5), and red-pilling (Figure 7).

In several topics, we find evidence of explicit anti-Semitic views, which is consistent with Daniels’s analysis of the continuation of white supremacy in magazine publications and in online spaces since the beginning of the internet (White Lies). This evidence is in conversation with and expands on McIlroy-Young and Anderson’s work, which showed that the language used on Gab portrays Jewish people negatively and that the word “Jew” was used in tandem with offensive language and ethnic slurs, unlike Google News, which used mostly neutral terms (653). While anti-Semitic views are overt on the platform, other forms of hateful content are coded and masked in respectable speech. We also found a distancing from the term “alt-right,” which further points to Gab users’ desire to view themselves as respectable people with a conservative outlook that differs from mainstream culture. Additionally, while the language about reality and truth and the opposition to mainstream media news is framed in a respectable, constitutionally protected way, it still reproduces ideologies of racism, misogyny, and other oppressive systems. The coded language masks the hateful content.

A topical analysis of Gab’s discourses is an important contribution to scholarship on echo chambers that form online groups based on homophily (Chun 60) and might segregate themselves from more mainstream platforms like Twitter and YouTube. This study finds evidence that Gab users are motivated by the freedom of speech principle of the platform, which allows for explicit anti-Semitic viewpoints and the spread of conspiracy theories (Figure 7). Users strongly desire alternative forms of truth and facts, as shown in the red pill/blue pill discussion (Figure 7) and the denial of the Holocaust (Figure 6). Figure 8 demonstrates that Gab users reproduce a negative image of Jewish people as liars and controllers of mainstream news sources. Together with Figure 7, it shows that the notion of red-pilling, i.e., awakening to reality, draws on anti-Semitic tropes. Red-pilling has been central to the alt-right’s justification of the conspiracy theory that Western culture is being destroyed, distorted, and manipulated by “diversity, inclusion, multiculturalism, and gender equality” (Stern 10). In the process of red-pilling and accepting the truth behind the lies, white people are awakened to reclaim the West and prevent their minority status (Stern 17). We found that Gab users reflected these claims as they discussed their reality being denied and actively worked to refuse what they perceive as lies spread by mainstream outlets. Many Gab posts attributed the perceived lies of the mainstream media to the Jewish community, which is one example of the prevalent anti-Semitic views across the platform.

Freedom of speech acts as a rallying point for right-leaning groups to mobilize supporters and as a way to build a community, as reflected in the #GabFam hashtag (Figure 5). Gab users find a community of like-minded individuals who share their feeling that their truths are being suppressed by mainstream media and by mainstream social media such as Twitter. This finding is in conversation with Chun’s notion of homophily: people come together based on hatred towards others, in this case mainstream media. This form of community building is framed in terms of shared beliefs, values, and seeing the “true” reality, but is in fact driven by hatred of others. Another driver of like-mindedness that this study found was Gab users’ strong support for Trump (Figure 5). Other platforms like Twitter have been vital to the production and support of Trump’s “fake news” campaign (Al-Rawi et al. 54), but further examination of Gab is important for learning how “fake news” spreads on this platform and how this connects to the beliefs of Gab users who have been banned or suspended from mainstream platforms (Berger, “The Alt-Right”).

Trump’s 2016 election was a transitional moment for the alt-right, and his strong support on Gab reflects this. Gab users reproduce the narrative of Trump’s 2016 presidential election campaign. They make truth claims and produce “alternative knowledge” that challenges mainstream media narratives, and they also claim that mainstream media are fake news altogether. Many spread misinformation that would likely have gotten them suspended or banned on platforms like Twitter. This is concerning because this form of propaganda can prevent legitimate and democratic discourse from happening (Al-Rawi et al. 65). Considering that this discussion is already strained and dominated by pro-Trump supporters on mainstream platforms like Twitter, a space like Gab further encourages the growth of unsupported truth claims such as the denial of the Holocaust. Often, these “alternative truths” also support and justify narratives of racism, sexism, and other forms of oppression (Lewis).

Concluding Thoughts

This study shows that, as a platform predicated on the idea of freedom of speech, Gab attracts users who espouse anti-Semitic sentiments and alt-right political ideologies and who question mainstream media on the platform. Freedom of speech is not politically neutral (Schradie 164). A space like Gab that emphasizes the importance of free speech allows the alt-right to promote hate speech and extremist ideologies, as we have shown with its anti-Semitic ideology. As a consequence, users no longer have to navigate boundaries and restrictions that might have gotten them banned or suspended on platforms like Twitter. Instead, the platform can be used to share and produce knowledge that fits within the notion of red-pilling, which further justifies anti-Semitic and other ideologies of oppression. This study provides evidence that unchecked freedom of speech and the formation of community through hashtags like #GabFam can generate an echo chamber of extremist ideology that can further radicalize users, as shown by the Pittsburgh mass shooter, who shared his manifesto on Gab prior to the attack (Roose, “On Gab”). Furthermore, we have shown that Gab members use coded language rather than overtly racist language to express their beliefs.

Recent events, including the 2020 American presidential election and the 2021 Capitol Hill Insurrection, call for more analysis of the alt-right and the various alternative social media platforms this movement uses to organize within the US and globally. This study expands on previous studies of the Gab platform and demonstrates that a topic modelling analysis of Gab data is useful for learning about the content of the platform and how it connects to the alt-right movement. Future research on the alt-right and the content it generates will benefit from examining the relationships between and roles of several platforms. We have seen that users move across platforms like Twitter, Gab, and Parler to navigate, plan for, and execute events, in-person attacks, and campaigns. Understanding the content of a single platform is no longer sufficient as users move across spaces and adapt to the technologies, possibilities, and limitations of each platform to share and spread their ideological beliefs. Further analysis is needed to understand the alt-right’s modes of dissemination in order to generate possible interventions and solutions.

Works Cited 

Ahmed, Sara. The Cultural Politics of Emotion. Edinburgh University Press, 2014. 

Al-Rawi, Ahmed, Jacob Groshek, and Li Zhang. “What the Fake? Assessing the Extent of Networked Political Spamming and Bots in the Propagation of #fakenews on Twitter.” Online Information Review, vol. 43, no. 1, 2019, pp. 53–71. doi.org/10.1108/OIR-02-2018-0065.

Baumgartner, Jason. “Using the Pushshift API to Collect Gab Data.” 2019. docs.google.com/document/d/1aN7DZxluW4Ty884AwnNB-Nzk0FHOsqK4DIEo8hG4e4I/mobilebasic.

Berger, J. M. “The Alt-Right Twitter Census: Defining and Describing the Audience for Alt-Right Content on Twitter.” VOX-Pol Network of Excellence. 2018. www.voxpol.eu/download/vox-pol_publication/AltRightTwitterCensus.pdf.

Brown, Abram. “Dumped By Twitter, Trump Looks To Build Or Buy His Way Back Onto Social Media.” Forbes, 23 March 2021, forbes.com/sites/abrambrown/2021/03/23/trump-gab-parler-social-media-app-network/?sh=53455a541101. Accessed 31 March 2021.

Chun, Wendy Hui Kyong. “Queerying Homophily.” Pattern Discrimination, edited by Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer, and Hito Steyerl, 2018, pp. 59–97.

Daniels, Jessie. “Racism in Modern Information and Communication Technologies.” Presentation at the International Convention on the Elimination of All Forms of Racial Discrimination, Geneva, Switzerland, 2019. academicworks.cuny.edu/hc_pubs/495/.

---. “The Algorithmic Rise of the ‘Alt-right.’” Contexts, vol. 17, no. 1, 2018, pp. 60–65. journals.sagepub.com/doi/abs/10.1177/1536504218766547.

---. White Lies: Race, Class, Gender and Sexuality in White Supremacist Discourse. Routledge, 2016.

Davis, Mark. "Transnationalising the Anti-Public Sphere: Australian Anti-Publics and Reactionary Online Media.” The Far-right in Contemporary Australia, edited by Mario Peucker and Debra Smith, Palgrave Macmillan, 2019, pp. 127–49.

“Gab Social.” Gab Social, gab.com/.

Goerzen, Matt, Elizabeth Anne Watkins, and Gabrielle Lim. “Entanglements and Exploits: Sociotechnical Security as an Analytic Framework.” 9th {USENIX} Workshop on Free and Open Communications on the Internet ({FOCI} 19), 2019. www.usenix.org/conference/foci19/presentation/goerzen.

Gökarıksel, Banu, Christopher Neubert, and Sara Smith. “Demographic Fever Dreams: Fragile Masculinity and Population Politics in the Rise of the Global Right.” Signs: Journal of Women in Culture and Society, vol. 44, no. 3, 2019, pp. 561–87. www.journals.uchicago.edu/doi/abs/10.1086/701154?mobileUi=0&.

Grimmer, Justin, and Brandon M. Stewart. “Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts.” Political Analysis, vol. 21, no. 3, 2013, pp. 267–97. www.cambridge.org/core/journals/political-analysis/article/text-as-data-the-promise-and-pitfalls-of-automatic-content-analysis-methods-for-political-texts/F7AAC8B2909441603FEB25C156448F20.

Hartzell, Stephanie L. “Alt-White: Conceptualizing the ‘Alt-Right’ as a Rhetorical Bridge between White Nationalism and Mainstream Public Discourse.” Journal of Contemporary Rhetoric, vol. 8, no. 1–2, 2018, pp. 6–25. contemporaryrhetoric.com/wp-content/uploads/2018/02/Hartzell8_1_2_2.pdf.

Heikkilä, Niko. “Online Antagonism of the Alt-Right in the 2016 Election.” European Journal of American Studies, vol. 12, no. 12-2, 2017. journals.openedition.org/ejas/12140.

“Holocaust Denial and the Big Lie.” Jewish Virtual Library, www.jewishvirtuallibrary.org/holocaust-denial-and-the-big-lie. Accessed 16 April 2021.

Kaplan, H. Roy. “Column: On the passing of Elie Wiesel.” Tampa Bay Times, 6 July 2016, www.tampabay.com/opinion/columns/column-on-the-passing-of-elie-wiesel/2284365/. Accessed 16 April 2021.

Lee, Moontae, and David Mimno. “Low-Dimensional Embeddings for Interpretable Anchor-based Topic Inference.” Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014. aclanthology.org/D14-1138.

Lewis, Rebecca. Alternative Influence: Broadcasting the Reactionary Right on YouTube. Data & Society Research Institute, 2018. datasociety.net/library/alternative-influence/.

Lima, Lucas, Julio C. S. Reis, Philipe Melo, Fabricio Murai, Leandro Araújo, Pantelis Vikatos, and Fabrício Benevenuto. “Inside the Right-leaning Echo Chambers: Characterizing Gab, an Unmoderated Social System.” 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), IEEE, 2018. doi.org/10.1109/ASONAM.2018.8508809.

Marwick, Alice, and Rebecca Lewis. “Media Manipulation and Disinformation Online.”  Data & Society Research Institute, 2017. datasociety.net/library/media-manipulation-and-disinfo-online/.

Maulbetsch, Erik. “Colorado Republicans Buck, Boebert, Lamborn & Ganahl Keep Parler & Gab Accounts on the Down-Low.” Colorado Times Recorder, 23 March 2021, coloradotimesrecorder.com/2021/03/colorado-republicans-buck-boebert-lamborn-ganahl-keep-parler-gab-accounts-on-the-down-low/35098/. Accessed 31 March 2021.

McIlroy-Young, Reid, and Ashton Anderson. “From ‘Welcome New Gabbers’ to the Pittsburgh Synagogue Shooting: The Evolution of Gab.” Proceedings of the International AAAI Conference on Web and Social Media. vol. 13, 2019, pp. 651–54. aaai.org/ojs/index.php/ICWSM/article/view/3264.

Mimno, David, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. “Optimizing Semantic Coherence in Topic Models.” Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, 2011, pp. 262–272. dirichlet.net/pdf/mimno11optimizing.pdf.

Nadler, Anthony. “Populist Communication and Media Environments.” Sociology Compass, vol. 13, no. 8, 2019, p. e12718. doi.org/10.1111/soc4.12718.

Nelson, Laura K., et al. “The Future of Coding: A Comparison of Hand-Coding and Three Types of Computer-Assisted Text Analysis Methods.” Sociological Methods & Research, vol. 50, no. 1, 2021, pp. 202–37. journals.sagepub.com/doi/pdf/10.1177/0049124118769114. Accessed 12 August 2021.

Roberts, Margaret E., Brandon M. Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner, Bethany Albertson, and David G. Rand. “Structural Topic Models for Open‐Ended Survey Responses.” American Journal of Political Science, vol. 58, no. 4, 2014, pp. 1064–82. onlinelibrary.wiley.com/doi/abs/10.1111/ajps.12103.

Roberts, Margaret E., Brandon M. Stewart, and Dustin Tingley. “Navigating the Local Modes of Big Data.” Computational Social Science, vol. 51, 2016. scholar.harvard.edu/files/dtingley/files/multimod.pdf.

Roose, Kevin. “On Gab, an Extremist-Friendly Site, Pittsburgh Shooting Suspect Aired His Hatred in Full.” The New York Times, 28 Oct. 2018, www.nytimes.com/2018/10/28/us/gab-robert-bowers-pittsburgh-synagogue-shootings.html.

---. “The Making of a YouTube Radical.” The New York Times. 8 June 2019. www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html

Rodriguez, M. Y., and H. Storer. “A Computational Social Science Perspective on Qualitative Data Exploration: Using Topic Models for the Descriptive Analysis of Social Media Data.” Journal of Technology in Human Services, vol. 38, no. 1, 2020, pp. 54–86. www.tandfonline.com/doi/abs/10.1080/15228835.2019.1616350.

Rosenberg, Matthew and Maggie Haberman. “The Republican Embrace of QAnon Goes Far Beyond Trump.” The New York Times. 22 Aug. 2020. www.nytimes.com/2020/08/20/us/politics/qanon-trump-republicans.html.

Schradie, Jen. The Revolution That Wasn’t. Harvard University Press, 2019.

Schwemmer, Carsten. “Introduction to stminsights: A Shiny Application for Inspecting Structural Topic Models.” [Computer software manual] (R package version 0.1. 0), 2021. cran.r-project.org/web/packages/stminsights/vignettes/intro.html.

Stern, Alexandra Minna. Proud Boys and the White Ethnostate: How the Alt-right is Warping the American Imagination. Beacon Press, 2019.

Taddy, Matthew A. “On Estimation and Selection for Topic Models.” Artificial Intelligence and Statistics, PMLR, vol. 22, 2012, pp. 1184–93. proceedings.mlr.press/v22/taddy12/taddy12.pdf.

Thomas, Elise. “ASPI Explains: 8chan.” The Strategist, 2019. aspistrategist.org.au/aspi-explains-8chan/.

Vidgen, Bertram. Tweeting Islamophobia. 2019. University of Oxford, PhD dissertation. ora.ox.ac.uk/objects/uuid:3c32d29d-e2e4-4913-abf8-2a28886f55a7.

Wallach, Hanna M., Iain Murray, Ruslan Salakhutdinov, and David Mimno. “Evaluation Methods for Topic Models.” Proceedings of the 26th Annual International Conference on Machine Learning, 2009. dl.acm.org/doi/10.1145/1553374.1553515.

“What Happened After My 13-Year-Old Son Joined the Alt-Right.” Washingtonian, 5 May 2019. www.washingtonian.com/2019/05/05/what-happened-after-my-13-year-old-son-joined-the-alt-right/.
