In November 2022, OpenAI, a frontrunner in the field of artificial intelligence (AI), launched ChatGPT. This release was more than just another advancement in machine learning and large language models; it marked a significant breakthrough in machines being able to generate text strikingly similar to that of humans. That same year, the global political landscape underwent its own seismic shift: For the first time in decades, closed autocracies outnumbered liberal democracies, reverting the state of global democracy to levels not seen since 1986. While ChatGPT and other breakthroughs in generative AI are not directly responsible for this downward trend, the swift adoption of these technologies in politics has raised alarm among experts, policymakers, and the public about AI’s potential to further erode democracy and accelerate a shift towards autocracy. As we find ourselves in the midst of what is considered the biggest election year in history, spanning 76 countries, including 43 democracies, and observing AI’s first significant use in political campaigning, it is crucial to map out and thoroughly explore the various ways this emerging technology could harm and change democracy.


Today’s AI is groundbreaking not because it follows a rigid set of rules, but because it uses “neural networks.” These sophisticated machine learning algorithms draw inspiration from the structure and function of the human brain, empowering machines to learn from massive datasets. From a sociotechnical standpoint, however, AI stands out for a different reason. Riding on the back of prior digital transformations that have allowed nearly every aspect of our lives to be converted into machine-readable data, AI now has the power to create hyper-personalized and targeted output at scale. From custom online shopping recommendations to tailor-made political advertisements, this output has the potential to influence and manipulate human behavior with unprecedented accuracy. Consequently, AI has emerged as the first technology in history capable of shaping societies in desired directions by exerting influence at their most micro level.


This unique ability makes AI an essential asset for those aiming to secure and hold political power, with few politicians and governments likely to forgo its advantages for personal and collective gains. The same applies to the economic sector, where the pursuit of profit drives an unrelenting push to develop and adopt AI. However, as various actors in both the political and economic spheres exploit these micro-level capabilities of AI, they risk changing the very fabric of liberal democracy by triggering profound structural changes. Two are particularly noteworthy.


The first one is the emergence of IT companies as a quasi-governing class. Digital technologies, including AI, have permeated our societies primarily through commercial channels, enabling a select few corporations and individuals to amass unprecedented wealth and, importantly, to assume de facto roles as arbiters in our social, political, and cultural exchanges through their ownership of social media platforms. These developments pose problems for at least two reasons. Firstly, the power IT companies have gained in the public sphere is not subject to democratic oversight and lacks public legitimacy, allowing them to consistently prioritize consumer interests over civic values. Secondly, and relatedly, this development has led to the increasing commodification of political discourse and a blurring of the boundaries between citizens and consumers.


The second change in democracy driven by AI centers on governmental adoption of techniques such as demos scraping. This practice provides authorities with a lens into citizens’ real-time preferences by analyzing “digital footprints” they leave via activities like social media engagement and internet browsing. Governments can use this data to make decisions they believe reflect strong public approval. However, techniques like these could significantly alter the relationship between governments and citizens. They risk transforming democracy from a system based on debate, negotiation, and consensus-building into a hyper-technocratic regime focused on data-driven preference formation and decision-making. A parallel trend is emerging in the realms of justice and policing, and it is a cause for concern due to its potential to perpetuate historical racial and gender biases.


Beyond the structural shifts that can undermine democracy, we can also delve into the particular harm that AI might inflict on democratic processes. Specifically, we should be mindful of rights-based and systemic harm. Rights-based harm arises when AI and other digital technologies are used to obstruct the free democratic participation of specific individuals or social groups. The most common tactics include surveillance, illicit data collection, profiling, targeted messaging, and the use of people’s likenesses to create disparaging deepfakes. Such actions can tarnish the reputation of individuals and groups and disqualify them as equal participants in the democratic arena. In contrast, systemic harm affects the broader political and social framework, and it refers to AI being used to disrupt democratic discussion. This type of harm manifests in increased societal polarization, fragmentation, distrust, and a general sense of apathy. Central here is the spread of misinformation and fake news, both expected to be turbocharged by recent advancements in generative AI, and both likely to exacerbate autocratic tendencies in democratic societies.


This list does not include all the ways AI could affect democracy. There are many more. And as AI becomes a bigger part of our lives, it also introduces challenges and changes that extend beyond democracy. A global regulatory framework is gradually emerging in response. Governments and international organizations such as the G7, G20, UNESCO, OECD, and the Council of Europe are all developing documents to promote “human-centric AI,” “trustworthy AI,” and “ethical AI.” The EU has gone the furthest, with the European Parliament’s recent adoption of the landmark Artificial Intelligence Act. However, regulation alone may not provide a bulletproof solution to the challenges of AI. On one hand, political, geopolitical, and economic interests are likely to limit its scope and effectiveness. On the other hand, measures outlined in regulations like the EU’s AI Act (prohibition, transparency, and education) may not be suited to addressing AI’s systemic harms, such as polarization and apathy. In fact, these measures could worsen such problems: heightening citizens’ awareness of the many ways AI can cause harm may itself deepen distrust and apathy. With all this in mind, it is imperative that we focus on AI as a force that can erode democracy, alongside and in interaction with the challenges posed by rising populism, authoritarianism, and right-wing politics.


Jelena Cupać is a post-doctoral research fellow in the Global Governance Unit of the WZB Berlin Social Science Center. In addition, she has worked as a lecturer at Potsdam University and Free University. She holds a Ph.D. from the European University Institute in Florence and an MA from the Central European University in Budapest. In her research, she has explored a variety of topics: democratic backsliding in the Western Balkan countries, the post-Cold War evolution of international security organizations (NATO, the OSCE, and the UN), global governance of artificial intelligence (AI), and, most recently, the global backlash against women’s rights, with a particular focus on the contestation unfolding in the UN. Her work has appeared in International Affairs, The British Journal of Politics and International Relations, Journal of Regional Security, Global Constitutionalism, and Democratization. Her article “Backlash advocacy and NGO polarization over women’s rights in the United Nations” is the winner of the 2022 International Affairs Early Career Prize.


Photo credit: Wikimedia Commons

