Avoiding Digital Imperialism In The Age Of AI
As AI becomes a key driver of economic growth, it presents both opportunities and challenges. To ensure AI benefits society, employees, customers, and organizations, it’s crucial to prevent a scenario where the concentration of power in the tech sector leads to a new form of digital dominance.
Modern Echoes Of The East India Company
Today’s tech giants — like Amazon, Facebook, and Google — wield power in ways that echo the monopolistic practices of past corporate entities such as the infamous East India Company (EIC). While a comparison with a historic militarized corporation may seem stark, it serves as a reminder of the potential consequences of unchecked corporate power.
Historian William Dalrymple aptly noted the dangers of a powerful, unregulated company operating without sufficient oversight — a situation that resonates in today’s context. The UN Advisory Body on AI echoes this in Governing AI for Humanity, stating that “technology cannot be left to the whims of the market” and that such challenges require a “holistic, global approach.”
While the EIC’s influence over India lasted only 75 years, its legacy as a ruthless capitalist force offers harsh lessons for the 21st century. These lessons are particularly relevant as public sector leaders shape the future of AI and digital governance.
Economic Dominance
Like the EIC, today’s tech giants started with niche markets but have expanded to dominate global digital economies. Google controls around 92% of the search engine market, while Amazon’s e-commerce dominance reshapes retail landscapes. AI amplifies this power, optimizing operations and targeting consumers with unprecedented precision. For instance, OpenAI’s ChatGPT became the fastest-adopted consumer application in history within two months of its launch. Even when they stumble, these giants’ vast influence makes them difficult to challenge.
Social Disruption
Modern tech giants, enhanced by AI, have disrupted social dynamics through their platforms. Facebook’s algorithms influence online interactions and can contribute to the spread of misinformation. Google’s search monopoly shapes access to information, subtly influencing public opinion and knowledge. Messaging platforms like Telegram and Signal, relied on by millions for privacy, also provide a haven for illegal activity. Forrester’s Global Government, Society And Trust Survey, 2024 shows that 49% of US adults distrust AI-generated information, yet AI-driven content can easily manipulate and reinforce biases, particularly given the lack of transparency surrounding its use.
Public Harm
While the EIC’s exploitative practices led to significant global trauma, today’s tech giants also face criticism for contributing to new forms of public harm. Social media platforms are increasingly linked to mental health issues, cyberbullying, and the spread of harmful content. These platforms also provide new arenas for criminal activity: France has filed preliminary charges against Telegram CEO Pavel Durov, holding him accountable for criminal activity conducted on his platform, despite that same platform being lauded for its role in supporting Ukraine’s defense against Russian aggression. The US Surgeon General has called for a warning label on social media platforms, and other countries, including Australia, are considering or implementing age-based limits and identity verification measures. AI has the potential to exacerbate these issues — a concern shared by 54% of US online adults. And when 45% of US adults say they don’t trust big tech organizations to manage the potential risks of AI, the need for transparency and accountability in its deployment becomes even more urgent.
Regulatory Challenges
Historically, governments have struggled to regulate powerful corporations, and the digital age presents similar challenges. Tech giants operate across borders, often bound only by domestic regulations that don’t adequately address their global impact. While efforts like the European Union’s AI Act are steps in the right direction, enforcement remains a challenge. With 52% of US online adults agreeing that AI poses a serious threat to society, effective oversight is critical to preventing the abuses that can arise in unregulated markets.
Preventing The Pitfalls Of Digital Overreach
While the term “digital imperialism” might seem extreme, it captures the extensive control and influence tech giants exert over global markets, culture, and politics. These companies relentlessly harvest user data, often without clear consent, raising privacy and ethical concerns. Their influence on public opinion, through control over information and advertising, parallels historical instances of corporate overreach. AI intensifies these issues, making regulatory intervention even more important.
To avoid the mistakes of the past and ensure that the digital future is equitable and fair, mission leaders in the public sector globally must consider the following actions:
- Strengthen antitrust laws. Reinforce antitrust regulations to prevent monopolistic practices and encourage competition. The EU’s actions against anti-competitive practices serve as an example of how to promote fair competition in digital markets.
- Enhance data privacy regulations. Implement comprehensive data privacy laws to protect consumer information, akin to the GDPR in Europe. The GDPR ensures consumers have control over their personal data and holds companies accountable for data misuse.
- Promote transparency and accountability. Require tech companies to disclose how their operations and algorithms work. The California Consumer Privacy Act mandates that companies provide clear information on data collection practices and offer consumers the right to opt out of the sale of their data.
- Encourage international cooperation. Develop consistent global standards and policies that transcend national borders. The Cross-Border Privacy Rules System, led by the Asia-Pacific Economic Cooperation, facilitates international cooperation on privacy standards. Australia’s eSafety Commissioner and the European Commission’s DG CNECT have also signed the first-ever “Digital Alliance” to jointly enforce their respective online safety acts.
- Safeguard public interest. Establish independent oversight bodies to monitor the societal impacts of tech giants and align their actions with the public good. In Australia, the Competition and Consumer Commission has actively regulated tech giants through the News Media Bargaining Code, a model Canada later adopted, drawing the same punitive response from Facebook that Australia did.
- Protect human rights. Commit to protection from adverse impacts of AI not only on human rights but also on the public institutions upon which society depends. Efforts are already underway, with the US signing the 46-member Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. The endorsement of the US and 11 other non-member nations is the first step toward a global AI treaty. If established, this would provide a legally enforceable basis for addressing discrimination resulting from AI use.
Learning From The Past To Safeguard The Future
The comparison between historical corporate overreach and modern tech giants isn’t meant to be a direct analogy but rather a cautionary tale. By learning from historical precedents and implementing detailed, cooperative regulatory measures, we can better manage the influence of today’s digital behemoths. This approach is crucial to prevent the negative consequences of digital imperialism and to foster a healthier, more equitable digital landscape, providing everyone with equal access to the AI Advantage.