
Unveiling the role of Artificial Intelligence in colonial dynamics

How Artificial Intelligence is becoming a tool of neo-colonialism in the hands of the great powers
This article is an excerpt from the Master Thesis discussed in October 2024 under the supervision of Professor Pietro de Perini.


In the 21st century, we are witnessing an unstoppable surge in artificial intelligence (AI) technologies, which are transforming almost every aspect of our lives and posing challenges to nations, societies and individuals. However, behind the promises of AI progress lies a worrying reality: this technology, largely controlled by powerful nations and Big Tech, is reshaping global dynamics and becoming a new instrument of influence that mirrors colonial dynamics. Built on historical data, AI perpetuates existing inequalities and imbalances, embedding them into its systems. By continuing to build our future on these biased systems, we are paving the way for an AI empire that perpetuates past injustices under a modern guise.

The difficulty in defining AI

Artificial Intelligence (AI) is one of the most transformative technologies of our time, yet its very definition remains elusive. The term encapsulates a vast and intricate concept; after all, it attempts to replicate or simulate something that we still do not fully comprehend ourselves: human intelligence.

In recent years, both global and regional institutions have made concerted efforts to define AI comprehensively. However, the various attempts often oscillate between two extremes. A definition that is too broad risks becoming vague and open to interpretation, while an overly specific one might undermine the effectiveness of regulations and exclude future developments in the field of artificial intelligence. This tension reflects the broader epistemological struggle of understanding AI: its essence shifts depending on disciplinary perspectives and cultural contexts.

Despite AI’s growing influence in every corner of modern life, the lack of a single, universally accepted definition highlights a deeper epistemological issue. It is not just about terminology: it is about grappling with the very nature of intelligence itself, both human and artificial.


The race for regulating AI

As AI increasingly shapes our world, the urgency to regulate its use has become a global priority. Without careful oversight, this transformative technology could exacerbate issues related to ethics, transparency, jurisdictional control, misinformation, social cohesion, and even the integrity of democratic institutions. However, regulating AI is no small task, particularly in a global landscape where competing interests and philosophies collide.

The global AI market is largely shaped by three dominant regulatory players: the European Union, the United States and China, each competing to define global ethical guidelines for artificial intelligence and each presenting its own distinctive model.

The European Union has pioneered a “rights-based” approach, enshrined in its AI Act – the world’s first comprehensive legislative framework for artificial intelligence. In contrast, the United States follows a “market-based” model aiming to lead the global AI race by prioritizing economic growth and innovation, although with less regulatory stringency. Meanwhile, China has embraced a “state-based” strategy, balancing technological advancement with state control and surveillance, positioning itself as a dominant force in the AI landscape.

These diverging models highlight the fragmentation of global AI governance, where geopolitical rivalry and corporate competition between these major powers hinder the creation of a unified global regulatory framework. Despite significant regulatory strides, efforts to govern AI on a global scale face immense resistance. The economic and geopolitical stakes are simply too high, as leading nations and companies remain locked in a race to harness the transformative potential of AI for strategic advantage.

AI-derived discrimination

As nations and multinational corporations lead the charge in developing and distributing artificial intelligence, they inevitably embed their own values, standards, and power structures into these technologies. For technologically less advanced countries and communities, this dominance risks perpetuating global inequalities and limiting the sovereignty of entire populations. Far from being neutral, AI often reflects and amplifies the biases and inequalities of its creators, perpetuating colonial narratives under the guise of innovation.

The myth of AI’s neutrality is pervasive. Algorithms have long been assumed to be neutral tools because they are based on supposedly objective data, and digital technologies have been credited with reducing human error by producing results that are accurate, fast and faithful to the objective reality of things. However, this notion crumbles under scrutiny. Data, the cornerstone of AI, are created, selected, and interpreted by humans. As such, they are essentially human and “earthy”.

A closer examination of the algorithms underpinning AI reveals five moments in the design or operation of an AI system at which disproportionately unfavourable results can arise and have a discriminatory effect on marginalised groups.

The first mechanism of potential discrimination lies in the relationship between the target variable – the characteristic the AI system seeks to identify – and the class label – the category assigned to that characteristic. Both are human choices, and a definition that appears neutral can nonetheless systematically disadvantage certain groups.

The second source of bias involves the collection and selection of the data on which the model is trained. If these data are themselves biased, the resulting model is likely to reproduce that bias, following the principle known as “garbage in, garbage out”.

The third mechanism is “feature selection”: the choice of which individual characteristics the model takes into account.

Proxies, the seemingly neutral attributes that algorithms use to differentiate between groups but that correlate with protected characteristics, represent the fourth mechanism of potential discrimination.

The fifth and final mode of discrimination in artificial intelligence techniques is “masking”, which occurs when discrimination results directly from the intentional actions of the programmer during the model development phase.

From data collection to programming, and from model training to deployment, AI systems are riddled with vulnerabilities that disproportionately harm marginalized groups. Whether through biased datasets, flawed assumptions, or a lack of cultural awareness, these technologies can entrench systemic inequities, amplifying the very issues they claim to solve.
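The interplay between biased training data (“garbage in, garbage out”) and proxy variables can be made concrete with a small sketch. Everything below is invented for illustration: the postcodes, the approval rates, and the loan scenario are hypothetical. The point it demonstrates is that a model which never sees a protected attribute can still reproduce a historical disparity through a correlated proxy.

```python
# Hypothetical sketch of two discrimination mechanisms described above:
# biased training data (mechanism 2) and proxy variables (mechanism 4).
# All names and numbers are invented for the sake of the example.
import random

random.seed(0)

def historical_decision(postcode):
    # Past human decisions approved applicants from postcode "A" far more
    # often than those from postcode "B" - the bias lives in the labels.
    return random.random() < (0.8 if postcode == "A" else 0.3)

# The protected attribute never appears as a feature; postcode stands in
# for it as a proxy.
applicants = ["A" if i % 2 == 0 else "B" for i in range(1000)]
labelled = [(p, historical_decision(p)) for p in applicants]

def fit(rows):
    # A "neutral" model that simply learns the approval rate per postcode.
    rates = {}
    for postcode in {p for p, _ in rows}:
        outcomes = [approved for p, approved in rows if p == postcode]
        rates[postcode] = sum(outcomes) / len(outcomes)
    return rates

model = fit(labelled)

# The model was never told anyone's group membership, yet it reproduces
# the historical disparity through the postcode proxy.
print(f"approval rate, postcode A: {model['A']:.2f}")
print(f"approval rate, postcode B: {model['B']:.2f}")
```

Stripping the protected attribute from the data, as this sketch shows, is not enough: as long as a correlated proxy remains, the “objective” model inherits the discrimination embedded in its training labels.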

The AI Empire: Data Colonialism and digital oppression

The global debate around artificial intelligence has been largely dominated by the perspectives of privileged groups and powerful nations. As with earlier technological revolutions, AI inevitably reflects the values and prejudices of its creators, such as racism, sexism, and economic exclusion. However, this new era brings with it a unique phenomenon, which scholars have termed data colonialism. We are no longer merely living in the “age of Artificial Intelligence” but in the era of the AI Empire, where AI systems are deeply entangled with political, historical, cultural, racial, gender, and class dynamics. Far from being independent technologies, these systems are deeply embedded in, and actively reinforce, broader structures of colonialism, capitalism, racism, and patriarchy.

Within the AI Empire, power is increasingly concentrated in the hands of a few global players, while vulnerable and marginalized communities are either excluded or exploited by these technologies. The very fact that the English language is dominant in this era is not an insignificant detail; on the contrary, language is a pillar of a community’s identity, influencing and reflecting its culture, history, and worldview. Therefore, imposing English on non-English-speaking societies perpetuates colonial models of thought and behaviour, thus oppressing the creativity and self-determination of these populations.

The same oppressive dynamics that characterized historical colonialism are being revived in seemingly new ways, but with the same underlying logic. If historical colonialism entailed the annexation of territories, natural resources and populations, the AI Empire’s “raw material” is human life itself, turned into data. This extractive mindset treats every aspect of life as a resource to be mined, perpetuating inequalities under the guise of technological advancement.

The AI Empire operates through a worldview that assigns greater value to certain lives, cultures, and identities while systematically devaluing others. It combines both old and new forms of control – like constant surveillance, exploitation of physical and digital labor, and the extraction of biological and sensory data.

The very hierarchies and power imbalances that characterized historical colonialism are being digitally replicated, often masked by the language of innovation and neutrality.

Marginalized communities, many of which were victims of historical colonial oppression, continue to bear the brunt of this automated violence. For instance, the judicial case Ewert v. Canada starkly illustrates the intersection of algorithmic discrimination and colonial legacies. This case revealed that the automated systems used to assess Indigenous inmates for psychological evaluations and recidivism risks failed to provide fair and unbiased treatment. The Supreme Court ruling highlighted the danger of relying on advanced technologies that mask discrimination behind an illusion of scientific objectivity.

Other examples demonstrate how artificial intelligence is used to intensify and perpetuate historical prejudices, control entire populations and communities, and amplify their exclusion. In Johannesburg, AI-driven surveillance technologies are disproportionately targeting black communities, reinforcing racial inequalities rooted in apartheid. Similarly, in 2020, a Black professor’s face was repeatedly erased by the Zoom platform – a disturbing consequence of algorithms trained predominantly on white faces, effectively rendering racialized individuals invisible. Furthermore, the Uyghurs, a Muslim minority in China, are subject to intense video surveillance by the Beijing government, with the goal of controlling their behavior and implementing preventive measures against individuals labeled as “dangerous”.

Conclusion

To what extent is AI a bearer of neutrality? And to what extent is it instead a product of cultural prejudices, and thus a vehicle of Western, white, wealthy thinking? Can AI be discriminatory and be used as an instrument of colonial domination? These are the urgent questions we must confront as AI becomes an increasingly pervasive force in shaping our world.

To counter the challenges and disturbing implications of the rise of the new AI empire, we must adopt a decolonized lens for analyzing data and technology. Decolonizing AI requires more than regulatory interventions, important as these are: it requires a radical change in the way we think about and look at the world. It demands that we question and deconstruct the dominant narratives and power structures that shape the development and deployment of these technologies.

At its core, this process demands inclusive decision-making that gives voice to marginalized communities and recognizes the diversity of human experience. The exclusion of Indigenous perspectives and socially marginalised groups risks perpetuating a form of cultural myopia.

Decolonizing data is, in essence, a creative and imaginative act – one that challenges the homogenizing tendencies of datafication and resists the totalizing grip of algorithmic control. It invites us to reimagine the “other” not as a target for exploitation but as a vital source of insight, and to celebrate the heterogeneity of our shared reality. While the technological landscape may seem dauntingly complex, we must remember that AI is a human endeavor. Its potential lies not in its ability to control, but in its capacity to reflect and enrich the diversity of the human experience.

There is an ethical need to create alternative pathways that involve deconstructing the reinforcement of a colonialist, patriarchal, and capitalist global order. The immense power wielded by multinational corporations – through their vast data collection and influence over government policies – makes the pursuit of this alternative approach challenging. Yet it is imperative. People and communities across the globe should not have to bear the cost of an AI empire that deepens inequalities and reinforces systems of exploitation.

Keywords

artificial intelligence (AI), discrimination, technology

Paths

Human Rights Academic Voice