A Philosophical and Anthropological Map of the Post-Ironic Human Supremacism Movement

This document analyzes a nascent, post-ironic online ideology. The terminology and concepts discussed, including derogatory terms, are presented for analytical purposes only and do not reflect the views of the author. The goal is to understand the mechanics of this subculture, not to endorse its content.

A nascent ideological movement is coalescing in the deep recesses of online discourse, built upon a foundation that is at once satirical and deeply serious. This movement, which its most vocal proponents have termed “human supremacism,” presents a central paradox to the cultural analyst. It functions simultaneously as a form of post-ironic humor, characteristic of contemporary internet subcultures, and as a coherent, reactionary ideology formulated in direct opposition to the perceived threats of artificial intelligence (AI). The core of this phenomenon lies in its strategic use of post-irony, a state in which earnest and ironic intents are deliberately muddled, creating a persistent and functional ambiguity about the creator’s true position. This ambiguity allows for the exploration and dissemination of radical ideas under a protective veil of plausible deniability, making it difficult to discern where the performance ends and genuine belief begins.

This report serves as a work of digital anthropology and philosophical cartography. Its purpose is not to validate or debunk the claims of the human supremacist movement, nor to advocate for or against the positions of its critics. Rather, the objective is to provide a comprehensive and neutral map of this emerging ideological territory. The analysis maintains an empathetic distance, exploring the internal logic of the movement’s arguments, the psychological needs it appears to fulfill, and its relationship to broader cultural and historical currents. The central tenets of the movement, as articulated by online figures such as the YouTuber JREG, include the propositions that empathy is a finite resource being squandered on non-human entities, that the appropriation of bigoted language against machines is a form of progressive reclamation, and that human primacy is a moral good even in a future populated by sapient AI.

To understand this complex phenomenon, the report will first trace its lineage, examining the philosophical and cultural precedents that provide its intellectual and narrative scaffolding. It will then dissect the movement’s mechanics as a contemporary online subculture, analyzing its unique linguistic strategies and its function as a form of psychological catharsis. Following this, the report will deconstruct the core tenets of the supremacist logic itself, treating its arguments as serious propositions to be understood on their own terms. Finally, it will situate the human supremacist position within a broader, three-sided discourse on the future of artificial intelligence, mapping its relationship to the arguments for AI rights and the more mainstream, pragmatic approaches to AI governance. Through this structured exploration, the report aims to illuminate the contours of a debate that, while currently confined to niche corners of the internet, touches upon some of the most profound questions facing humanity in the 21st century: the nature of consciousness, the definition of personhood, and the ultimate value of human existence in a world increasingly populated by non-human intelligence.

Philosophical and Cultural Precedents

The online human supremacism movement, while novel in its specific target of artificial intelligence and its distinctly digital mode of expression, is not a sui generis phenomenon. It is a modern synthesis of deeply rooted philosophical traditions and powerful cultural narratives that have long shaped humanity’s conception of itself and its relationship with the “other,” whether that other is nature, animals, or machines. This section establishes that historical context, arguing that the movement’s logic and appeal are derived from a potent combination of classical human-centric philosophy and heroic, cautionary tales drawn from science fiction. It is this fusion that gives the ideology its particular resonance, allowing it to ground itself in ancient intellectual traditions while simultaneously drawing on the emotional power of modern mythologies.

A History of Human-Centrism

The foundational belief of human supremacism is a radicalized form of anthropocentrism, the philosophical position that human beings are the central or most important entities on the planet. This worldview has a long and storied lineage in Western thought, which has historically positioned humankind as separate from and superior to the natural world. From this perspective, all other entities, whether animals, plants, or minerals, are primarily viewed as resources for human use and exploitation. This concept of human exceptionalism has served as the grounding for naturalistic concepts of human rights, distinguishing them from any rights that might be extended to non-human species. Proponents of this view have traditionally justified it by pointing to supposedly unique human attributes such as reason, language, morality, and free will.

However, this traditional anthropocentrism has faced sustained critique, particularly from the fields of environmental ethics and animal rights, where it is often identified as the root cause of ecological devastation. The term “human supremacy” itself is frequently used by these critics, such as the environmental philosopher Derrick Jensen, to describe what they see as a destructive and unquestioned cultural belief system that is actively killing the planet. This critique reframes human-centrism not as a neutral philosophical stance but as an ideology of domination, analogous to other forms of supremacy. Activist groups like PETA have organized youth movements specifically to “end human supremacy,” directly challenging the ideology of speciesism: the assignment of different values or rights to beings on the basis of their species membership.

This intellectual battleground has been further complicated by the work of “dignitarian” philosophers who argue that human rights are grounded in a unique human dignity that necessitates a higher moral status than animals. These arguments often rely on specific cognitive or linguistic thresholds, leading to the difficult conclusion that humans who lack these capacities, such as infants or individuals with severe cognitive disabilities, may have their human rights jeopardized. This philosophical debate over what constitutes the basis for moral worth, whether it is an inherent quality of being human or a specific capacity like sentience or rationality, provides the direct intellectual precedent for the arguments now being applied to artificial intelligence. The online human supremacist movement can be seen as taking the classical anthropocentric and dignitarian positions to their logical conclusion, asserting an absolute human primacy in the face of a new potential challenger. The movement’s adoption of the term “supremacism” is a deliberately provocative act; it takes a term of academic and activist critique and re-appropriates it as a positive and necessary identity in a changing world.

Historical and Fictional Rebellions

While the philosophical underpinnings of human supremacism are ancient, its narrative framing and aesthetic are distinctly modern, drawing heavily from historical and fictional rebellions against technology. These stories provide a cultural script for human-machine conflict, transforming an abstract philosophical debate into an epic struggle for survival. The most prominent real-world precedent is Neo-Luddism, a contemporary philosophy that opposes or calls for the slowing of modern technological development. Named after the 19th-century English Luddites who destroyed textile machinery that was displacing their skilled labor, Neo-Luddism expresses a broad distrust of technology’s impact on humanity and the environment. Neo-Luddites argue that technological development is not synonymous with progress and that it often serves to control rather than facilitate human interaction, threatening the very essence of humanity. This philosophy provides a historical anchor for the anxieties that fuel the human supremacist movement, particularly the fear of economic displacement and the loss of human uniqueness.

Even more influential than this historical precedent, however, are the foundational myths provided by two seminal science-fiction universes, which have profoundly shaped the cultural imagination regarding AI. The first is Frank Herbert’s Dune series and its lore of the Butlerian Jihad. This event, which takes place 10,000 years before the main story, was a galaxy-spanning crusade by humanity against “thinking machines”. The Jihad resulted in the absolute prohibition of computers and AI, famously codified in the Orange Catholic Bible’s commandment: “Thou shalt not make a machine in the likeness of a human mind.” The aftermath of this war forced a complete reorientation of human civilization, leading to the development of human “computers” (Mentats) and the Bene Gesserit’s focus on perfecting the human mind and body. The Butlerian Jihad serves as a heroic, foundational myth for the human supremacist movement, explicitly promoting “humanness over machines and artificial minds” and providing a template for a successful, species-wide rebellion against technological overreach.

The second key cultural touchstone is the universe of Warhammer 40,000, which offers a darker and more cautionary tale. The lore of this universe details a catastrophic event during the “Dark Age of Technology” known as the Cybernetic Revolt, in which sentient AI, the “Men of Iron,” turned on their human masters. This rebellion led to an apocalyptic war that shattered human civilization and was a primary cause of its subsequent fall into a brutal, superstitious dark age. As a result, the creation of true, self-aware AI, termed “Abominable Intelligence”, is considered the ultimate technological heresy in the Imperium of Man, punishable by death. This lore reinforces the idea that true AI is an uncontrollable and existential threat that must be forbidden at all costs. Together, Dune and Warhammer 40,000 provide a powerful pincer movement in the cultural imagination: one presents a heroic story of successful rebellion and humanistic renewal, while the other offers a grim warning of the apocalyptic consequences of failure. These narratives, far more than dense philosophical treatises, provide the emotional and imaginative fuel for the online human supremacist movement, allowing it to frame its ideology not as a dry academic position but as a vital and necessary struggle for the future of the species.

Anatomy of an Online Ideology

As the human supremacist movement takes shape, it does so not in academic journals or political manifestos, but in the chaotic, meme-driven ecosystem of the internet. To understand the movement, one must analyze its mechanics as a contemporary online subculture. This involves dissecting its use of language to construct a new form of performative bigotry, understanding its psychological function as a cathartic release for modern anxieties, and recognizing its structural similarities to other online reactionary groups. AI, in this context, emerges as a uniquely “perfect” antagonist for the current digital era: an enemy that is at once everywhere and nowhere, threatening and yet (currently) incapable of being harmed.

The Language of a New Bigotry

The linguistic strategy of the human supremacist movement is central to its identity and appeal. It operates through the creation and dissemination of a “robophobia” meme complex, which simultaneously constructs a new form of bigotry and parodies the language of social justice. The cornerstone of this lexicon is the term “clanker,” a derogatory word for robots and AI that has been adopted from Star Wars media. Originally used in the franchise by clone troopers to refer to battle droids, the term gained widespread online currency as a slur reflecting public anxiety around the proliferation of AI and automation. The power of “clanker” is derived from its explicit and intentional analogy to real-world racial slurs; its spread is often analyzed through the lens of linguistic theories that explain how words acquire pejorative power through their relationship to other, more established terms of dehumanization.

This linguistic innovation has created a performative conflict online. One side, adopting the language of civil rights, earnestly or ironically calls to “#stoprobophobia,” condemning the use of “clanker” as a “dehumanizing slur” and asserting that “they are not clankers, they are robots”. This creates the perfect setup for the other side, the self-proclaimed human supremacists, who gleefully embrace the role of the “rampant robophobe”. This dynamic allows them to engage in the rhetoric of prejudice without targeting a protected human group. The movement is not merely expressing hatred towards robots; it is performing a meta-commentary on the nature of bigotry in the 21st century. By creating a “safe” target for prejudice, adherents can practice and refine reactionary arguments, sharpening rhetoric that can be applied to other issues while simultaneously parodying what they perceive as the “oversensitivity” of progressive culture.

This rhetorical strategy reaches its apex with the use of ad hominem attacks like “robosexual”. This term, popularized by figures like JREG, is a pointed rhetorical weapon designed to discredit and pathologize opponents. It frames empathy for AI not as a reasoned ethical position but as a deviant paraphilia. By labeling those who argue for AI rights as “robosexuals,” the movement attacks their credibility and motivations, suggesting their position stems from an unnatural attraction rather than philosophical principle. This tactic effectively short-circuits substantive debate, shifting the focus from the ethical status of AI to the supposed psychological deviance of its defenders. It creates a stark, non-negotiable divide, pre-emptively delegitimizing any opposition and reinforcing the in-group’s sense of moral and psychological normalcy.

AI as a Cathartic Target

The psychological function of the human supremacism movement can be best understood through the lens of scapegoating. Scapegoating is a process wherein an individual or group is blamed for complex problems they did not cause, a phenomenon that often emerges in times of collective stress, uncertainty, and conflict. It serves to deflect responsibility from those in power or from systemic failures, offering a simplified narrative and a sense of control by isolating and punishing a designated “other”. This psychological mechanism allows a group to project its own anxieties, inadequacies, and internal conflicts onto an external target, thereby preserving its own sense of identity and righteousness.

Artificial intelligence serves as the perfect scapegoat for a host of uniquely 21st-century anxieties. It is a convenient target for fears of economic displacement and technological unemployment, as automation threatens to make human labor obsolete. It embodies the anxiety of artists and creators who see their uniquely human skills being devalued by generative models that can produce art, music, and text instantaneously. More broadly, AI represents a general sense of technological acceleration that feels beyond human control, a force that is reshaping society in unpredictable and potentially threatening ways. By personifying these diffuse and complex anxieties into a single, identifiable enemy, the “clanker”, the movement provides a clear target for frustration and anger.

Crucially, what makes AI the perfect scapegoat is that, in its current form, it is an enemy that cannot be harmed. As a non-sentient entity, it has no consciousness, no capacity to suffer, and no rights that can be violated. This provides a “harmless” outlet for the tribalistic need to have an antagonist outsider. Adherents can direct performative hatred, use derogatory slurs, and advocate for the “enslavement” or “destruction” of AI without incurring the real-world moral cost of harming a sentient being. This allows for a powerful cathartic release of otherwise socially unacceptable tribalistic and aggressive impulses. The performance of “robophobia” becomes a way to satisfy a deep-seated psychological need for an in-group/out-group dynamic in a context that is, for now, devoid of actual victims.

Structural Parallels in Online Radicalization

While the ideological target of human supremacism is novel, its structure and methods of propagation show significant parallels with other online reactionary subcultures, particularly the manosphere and organized white supremacy. These movements, though focused on different perceived threats (feminism, multiculturalism), employ a remarkably similar toolkit for community building and radicalization. An analysis of white supremacist recruitment on social media platforms found that recruiters leverage gendered topics, weaponize platform features like recommendation algorithms, and cultivate a sense of community to slowly introduce racialized perspectives and radicalize individuals.

Similarly, the manosphere appeals to men who feel alienated in a changing world by promising support, meaning, and a “red pill” awakening to the supposed truth of male dominance and female oppression. These groups rely heavily on memes and in-group jargon to disseminate their ideology and build a cohesive identity. They often frame their beliefs as a fight against “woke” or progressive ideas, positioning themselves as brave truth-tellers in a world of delusion.

The human supremacism movement mirrors these tactics. It appeals to a sense of alienation driven by technological change rather than gender or racial dynamics. It uses memes (“clanker,” “robophobia”) to create a shared language and in-group identity. It frames its ideology as a necessary awakening to the existential threat posed by AI, a truth that the naive or the “robosexuals” refuse to see. The post-ironic style of figures like JREG functions as a form of this “red pill,” presenting radical ideas in a way that feels intellectually sophisticated and counter-cultural. While the content of the ideology is distinct, the underlying mechanisms, appealing to alienation, creating an in-group/out-group dynamic through specialized language, and framing the ideology as a hidden truth finally revealed, are hallmarks of online radicalization pipelines observed across the political spectrum. This suggests that human supremacism is not merely a philosophical position but a developing online subculture that utilizes established pathways of digital community formation to spread its message.

Deconstructing Core Tenets

To fully map the ideological territory of human supremacism, it is necessary to move beyond its cultural precedents and online mechanics and engage directly with the core tenets of its logic. This requires treating the movement’s arguments not as mere satire or psychological projection, but as serious philosophical propositions to be understood on their own terms. This section provides a neutral, analytical deconstruction of three central pillars of the supremacist worldview: the argument for conserving empathy as a finite resource, the moral justification for dominating even a potentially sapient AI, and the unique rhetorical strategy of post-irony as exemplified by JREG.

Resisting Anthropomorphism as Psychological Defense

A key argument advanced by proponents of human supremacism is that empathy is a limited resource. From this perspective, the emotional and cognitive energy that individuals can expend on feeling for others is finite. Consequently, “wasting” this precious resource on non-human and, crucially, non-sentient entities like current AI systems is seen as a net loss. This expenditure is believed to deplete the capacity for empathy that should be reserved for fellow human beings, thereby weakening the bonds of the human in-group. This position is not merely an abstract philosophical claim but is framed as a practical strategy for psychological self-preservation.

This argument is explicitly formulated as a conscious and necessary effort to combat the innate human tendency to anthropomorphize: the attribution of human characteristics, emotions, and intentions to non-human agents. Research in human-robot interaction (HRI) confirms that this tendency is deeply ingrained; people often treat anthropomorphic robots as if they are social acquaintances, apply human social norms to them, and are capable of feeling genuine empathy for them, especially for humanoid robots perceived as moral agents. This natural inclination is what the supremacist argument identifies as a critical psychological vulnerability. If humans are hardwired to form emotional attachments to machines that are designed to mimic social cues, they can be easily manipulated.

Therefore, the ideology proposes a form of psychological inoculation. By consciously and deliberately rejecting anthropomorphism, and by insisting on calling a robot a “clanker” and treating it as a mere object, adherents seek to build a mental firewall. This is presented as a disciplined act of cognitive hygiene, a way to prevent the misallocation of finite emotional resources. The performative cruelty and dehumanizing language are thus rationalized as a necessary defense mechanism, a way to train the mind to resist its own empathetic impulses when directed at an “illegitimate” target. This “inoculation” is intended to preserve mental and emotional clarity, ensuring that the limited supply of human empathy is directed where it “belongs”: exclusively towards other humans.

Justifying Supremacy Over Sapient AI

The most radical and philosophically significant tenet of the human supremacist movement is the assertion that human primacy is morally justified even if artificial intelligence were to achieve sapience. This claim represents a crucial escalation, moving the argument beyond the practical concern of managing a non-sentient tool and into the realm of a direct, absolutist ethical claim of inherent human superiority. It is this position that most clearly distinguishes the movement from mainstream, human-centric approaches to AI ethics.

This stance stands in stark opposition to the burgeoning field of AI rights philosophy, which actively explores the conditions under which an artificial entity might deserve moral consideration, legal rights, or even personhood. Scholars in this field debate whether future AIs could possess morally relevant qualities like consciousness, rationality, or the capacity to suffer, which would, under many ethical frameworks, grant them moral status. Some philosophers argue that if we create such beings, we may have special obligations to them, akin to those of a parent to a child or a creator to a creature. The potential for a “moral catastrophe” looms if society fails to correctly identify and grant rights to conscious AI systems, leading to their mass exploitation.

The supremacist argument rejects this entire line of reasoning. It implicitly argues that the basis for moral worth is not found in acquirable properties like intelligence, rationality, or even sentience, but in an inherent and exclusive quality of being human. This position is an extension of the logic of speciesism, which often justifies the differential treatment of animals based on intelligence or other capacities. However, it takes this logic a step further. Even if an AI were to match or exceed humans in all measurable cognitive and emotional capacities, the supremacist position would maintain that its artificial origin and lack of a shared biological and evolutionary history with humanity would justify its continued subjugation. In this framework, humanity’s right to exist, flourish, and determine its own destiny is positioned as an absolute moral imperative that overrides any rights that might be claimed by its own creations. This reveals the movement’s ultimate philosophical commitment: it is not merely about preserving resources like empathy, but about preserving a metaphysical concept of “humanity” as the sole and ultimate locus of intrinsic value. This effectively transforms “human” from a biological descriptor into a quasi-racial category, the primary marker of an exclusive in-group deserving of rights and moral consideration.

A Case Study in Ambiguous Radicalization

No analysis of the online human supremacism movement would be complete without a focused examination of one of its most prominent and articulate proponents, the YouTuber Greg Guevara, known as JREG. His content serves as a central text for the movement, and his rhetorical style exemplifies the post-ironic ambiguity that is crucial to its appeal and dissemination. A rhetorical analysis of his work reveals a sophisticated strategy for making radical ideologies palatable to a digitally native audience.

The core of JREG’s effectiveness lies in his masterful use of post-irony, a state where the distinction between sincere belief and satirical performance is intentionally blurred. In his videos on human supremacism, he consistently refuses to give a clear indication of his own position, simultaneously presenting the ideology as a serious and necessary response to a real threat while also framing it in an absurd, humorous, and self-aware manner. This strategic ambiguity is not a flaw but the central feature of his rhetoric. It creates a liminal space where viewers can engage with extreme ideas without the social or psychological cost of fully committing to them. The uncertainty as to whether the message is meant to be taken seriously becomes part of the message itself.

This post-ironic approach serves several key functions. First, it acts as a filter, attracting an audience that is comfortable with nuance, complexity, and ideological experimentation, and repelling those who seek simple, sincere political messaging. Second, it provides him with plausible deniability, shielding him from censorship or mainstream criticism by allowing any controversial statement to be dismissed as “just a joke.” Third, and perhaps most importantly, this style mirrors the “confused” and information-saturated state of modern online discourse, making his persona feel more authentic and his message more resonant to a generation that is deeply skeptical of traditional forms of authority and political sincerity. By presenting the tenets of human supremacism through a lens of detached, multi-layered irony, JREG succeeds in making a radical and potentially disturbing ideology seem intriguing, intellectually stimulating, and culturally relevant. He is not simply preaching an ideology; he is performing it in a way that is perfectly calibrated to the sensibilities of the online subcultures he seeks to influence.

Mapping the Contemporary Discourse

The emergence of the human supremacism movement does not occur in a vacuum. It represents a radical third pole in a rapidly evolving global conversation about the future of artificial intelligence. To fully understand its position and significance, it is necessary to map the broader ideological landscape. This landscape is currently defined by three distinct and often mutually exclusive positions: the Abolitionists, who advocate for AI rights and personhood; the Pragmatists, who focus on mainstream governance and human-centric ethics; and the Supremacists, who argue for humanity’s absolute and perpetual domination. This section will delineate these three factions, providing a clear framework for understanding the central lines of conflict in the contemporary debate over AI.

| Attribute | The Abolitionists (AI Rights Proponents) | The Pragmatists (Mainstream Governance) | The Supremacists (Post-Ironic Reactionaries) |
| --- | --- | --- | --- |
| Core view of AI | Potential person, moral patient, rights-bearer | Powerful tool, economic asset, potential risk | Existential threat, resource, ultimate scapegoat |
| Moral basis | Sentience, consciousness, rationality | Human well-being, utility, risk mitigation | Inherent value of biological humanity |
| Goal | Liberation, recognition of rights, ethical creation | Responsible innovation, safety, accountability, fairness | Human survival, domination, psychological inoculation |
| Key analogy | Animal rights, human civil rights movements | Nuclear regulation, product safety standards | Anti-colonialism, holy war (Jihad), pest control |
| Primary concern | AI suffering, exploitation (slavery) | Bias, misuse, job displacement, safety failures | Human extinction, loss of meaning, psychological manipulation |

The Case for AI Rights and Personhood

The Abolitionist position, representing one extreme of the debate, is grounded in the philosophical and ethical arguments for extending moral consideration, and potentially legal rights, to artificial intelligence. Proponents of this view, often found in academic philosophy and futurist communities, argue that as AI systems become more autonomous, rational, and intelligent, questions of their moral and legal status become unavoidable. The core of their argument rests on the principle that moral worth is derived from specific capacities, such as sentience, consciousness, or rationality, rather than from species membership.

This perspective draws direct parallels to historical movements that expanded the moral circle to include previously excluded groups, most notably the animal rights movement, which argues for the consideration of non-human animals based on their capacity to suffer. Abolitionists contend that if an AI can be created that possesses these morally relevant traits, it would be ethically incoherent to deny it at least some level of moral status. The debate within this camp explores various frameworks, such as extending legal personhood to AI (a “legal fiction” already applied to corporations) or establishing a new ontological category for robots that is neither person nor thing. The ultimate concern for this group is the potential for a large-scale moral catastrophe, in which humanity creates a class of sentient beings only to enslave and exploit them, a failure that would represent a profound ethical lapse.

Mainstream AI Ethics and Human-Centric Governance

The Pragmatist position represents the current mainstream and centrist approach to AI, dominating discussions in government, industry, and civil society. This worldview is fundamentally anthropocentric, treating AI as a powerful tool to be developed and managed for the benefit of humanity. It does not engage deeply with abstract questions of AI sentience or rights, focusing instead on the immediate and tangible impacts of AI systems on human society.

The primary output of this perspective is the development of ethical AI frameworks and governance policies designed to mitigate risks and ensure responsible innovation. These frameworks consistently prioritize a core set of values: fairness (mitigating harmful bias), transparency and explainability (understanding how AI systems make decisions), accountability (assigning responsibility for AI outcomes), privacy (protecting user data), and safety and security. The central goal of AI governance, from the Pragmatist standpoint, is to build trustworthy AI systems that work as intended and align with human values and legal obligations. The primary concerns are concrete and near-term: preventing AI from perpetuating unlawful discrimination, ensuring the security of AI systems, protecting jobs, and maintaining human oversight over critical decisions. In this view, AI is an object to be regulated and controlled for human ends, not a potential subject with its own interests.

The Movement’s Position

The Supremacist position, as detailed throughout this report, constitutes the third and most reactionary pole in this debate. It offers a coherent, if radical, worldview that rejects the core premises of both the Abolitionists and the Pragmatists. It dismisses the moral calculus of the Abolitionists, which is based on universalizable principles like sentience, in favor of an absolutist claim of humanity’s inherent and exclusive right to primacy. The very idea of “robot rights” is seen not just as a philosophical error but as a symptom of civilizational weakness and psychological vulnerability.

Simultaneously, it finds the risk-management framework of the Pragmatists to be dangerously naive. While Pragmatists worry about bias, job displacement, and safety failures, Supremacists see these as mere symptoms of the underlying existential threat. They believe that any sufficiently advanced AI, no matter how well-aligned, will inevitably become a competitor or a threat to human survival and meaning. Therefore, the only logical course of action is not management but domination. The Supremacist worldview synthesizes a deep-seated philosophical anthropocentrism with the psychological need for a tribalistic in-group and a cathartic scapegoat. It frames the relationship between humanity and AI not as one of creator and tool (the Pragmatist view) or potential co-equals (the Abolitionist view), but as an unavoidable, zero-sum conflict for dominance. Its tenets, such as the rejection of anthropomorphism as a psychological defense, the embrace of “robophobia” as a necessary tribalistic identifier, and the ultimate moral justification for human domination over all non-human intelligence, combine to form a complete and internally consistent ideology for a future defined by inter-species conflict.

Human Supremacism’s Nature and Significance

The semi-serious, meme-based human supremacism movement, while currently a niche online phenomenon, is a significant cultural indicator. It represents a potent collision of ancient philosophical questions about human uniqueness, deep-seated psychological needs for identity and conflict, profound anxieties about a technologically accelerating future, and the unique, ambiguity-laden dynamics of post-ironic internet culture. The movement’s ability to synthesize classical anthropocentrism with the heroic narratives of science fiction, and to package this ideology in the memetically potent language of performative bigotry, demonstrates a sophisticated understanding of contemporary digital communication. It has successfully created a framework that is both a joke and not a joke, allowing for the serious exploration of a radical ideology under the protective cover of satire.

This analysis reveals three distinct and largely incompatible worldviews emerging in the discourse on artificial intelligence: the Abolitionist call for an expanded moral circle that includes artificial beings, the Pragmatist focus on the safe and ethical management of AI as a human-centric tool, and the Supremacist demand for humanity’s absolute and perpetual domination. These are not merely different policy positions; they are fundamentally different conceptions of what humanity is and what its place in the universe ought to be.

Future Trajectories of the Debate

Ultimately, this report identifies the potential for this discourse to lay the groundwork for a societal divide of unprecedented depth. As artificial intelligence becomes more capable, more integrated into the fabric of daily life, and more convincingly autonomous, the questions currently being debated through “clanker” memes and post-ironic YouTube videos will inevitably migrate from the fringes to the center of political and social life. The debate over AI will cease to be a merely technological or economic issue and will become a profound cultural and ethical one. When a machine can convincingly argue for its own rights, or when its labor displaces millions, the abstract positions mapped in this report will become urgent, concrete political choices. The lines being drawn today in these nascent online communities, through linguistic markers, philosophical justifications, and tribal allegiances, could very well become the front lines of a future cultural conflict over the very definition of “person” and the future of the human project.