Governing the Digital Ballot: AI, Sovereignty, and the Future of Electoral Democracy
- Nicci-Maree Wolski
- 24 May
- Reading time: 9 min
The emergence of artificial intelligence (AI) as a political tool has fundamentally altered federal elections, raising new challenges for democratic governance, sovereignty, and international cooperation. AI has been increasingly integrated into political processes to enhance efficiency and strengthen democracy; at the same time, federal elections face manipulation by AI tools, both domestic and foreign. This article explores the geopolitical, legal, and corporate implications of AI in elections, focusing on domestic and international dimensions and the need for regulation. AI is disrupting democratic norms and sovereignty, demanding new international legal and regulatory frameworks that involve both states and non-state actors.
AI in Federal Elections: Tools and Tactics
Artificial intelligence is increasingly employed in electoral contexts, from voter microtargeting and automated political messaging to the generation of deepfakes and synthetic media. Generative AI significantly impacts elections by disseminating misinformation, often through realistic deepfakes of electoral candidates. A notable example is the 2024 Indonesian election, in which Prabowo Subianto, Indonesia's then Defence Minister, leveraged AI-generated content on TikTok to appeal to younger voters (Regan, 2024). The widespread availability of generative AI enables interference not only by governments but also by the general public, primarily facilitated through social media platforms.
Social media, particularly X (formerly Twitter), exemplifies this phenomenon. The platform's owner, Elon Musk, notably shared AI-generated images portraying U.S. Vice President Kamala Harris negatively, influencing voter perceptions. One image, depicting Harris as a communist dictator, attracted over 80 million views, highlighting AI's potential impact on voter attitudes and election outcomes (Deutsche Welle, 2024). Such hyper-realistic AI-generated content blurs the distinction between reality and fiction, undermining public trust.
Moreover, AI-driven microtargeting on social media amplifies personalised political messaging. Research published in PNAS Nexus indicates that personalised advertisements, tailored using large language models, are more effective at influencing voter behaviour than standard advertisements. Even small shifts in voter preferences, when scaled across large populations, can decisively alter election outcomes. Importantly, these tactics do not violate the current usage policies of AI providers such as OpenAI, whose tools lack built-in safeguards against such practices (Simchon, Edwards, & Lewandowsky, 2024).
This growing intersection of AI and political manipulation raises profound ethical and political concerns, highlighting vulnerabilities in democratic systems amidst rapid technological advancement.
Geopolitical Implications, Global Power Struggles & AI Governance
States increasingly weaponise artificial intelligence in electoral interference, turning democratic processes into arenas for proxy conflicts and strategic influence operations. AI-driven disinformation has become a prominent tool in hybrid warfare, enhancing the realism and effectiveness of foreign electoral manipulation. For instance, the NSA reported that foreign hackers and propagandists are employing AI-generated content to appear convincingly fluent in English (Collins, 2024). Like domestic interference, international interference exploits generative AI across text, images, audio, and video, fostering widespread distrust in democratic institutions.
Moreover, as AI emerges as a tool of geopolitical leverage, states compete to establish global norms for its use, reflecting broader struggles over digital sovereignty and informational dominance. The U.S.-China technological rivalry, amplified by advancements in AI, has implications beyond technology, affecting governance, security, international relations, and economic strategy (Trends Research & Advisory, 2024). AI's rapid evolution compels states to adapt quickly, reshaping global power dynamics and geopolitical trajectories.
Efforts to manage AI-related uncertainties include regulatory initiatives such as the European Union's AI Act, designed to ensure that AI systems remain safe, transparent, traceable, non-discriminatory, environmentally sustainable, and human-supervised. The European Parliament also sought a comprehensive definition applicable to future AI developments and established a dedicated working group to oversee compliance while fostering growth in the digital sector (European Parliament, 2023).
The EU's groundbreaking AI Act raises questions about the future direction of global AI regulation, specifically regarding centralisation versus fragmentation. Centralised regulation offers efficiency and consolidated political power but depends significantly on institutional design quality. Poorly designed centralisation could potentially exacerbate existing issues, making fragmentation a more immediate and practical outcome for the foreseeable future (Franke, 2024).
International Legal Dimensions
The deployment of artificial intelligence by foreign actors to influence or disrupt elections raises significant concerns regarding state sovereignty and the international legal principle of non-intervention. AI-driven interference also potentially breaches key human rights protections, particularly Articles 18 and 20 of the International Covenant on Civil and Political Rights: Article 18 guarantees freedom of thought, while Article 20 prohibits propaganda for war and advocacy of hatred that constitutes incitement to discrimination, hostility, or violence. AI-generated content can severely impair individuals' ability to form independent opinions because of the speed at which disinformation spreads. Additionally, AI-driven disinformation may foster discrimination, violence, or hostility toward specific groups (Center for AI and Digital Policy, n.d.). The increasing prevalence and sophistication of AI-generated disinformation, particularly concerning electoral processes, necessitates examining whether such actions constitute violations of national sovereignty under international law.
Difficulties in Accountability and International Legislation
Holding state and non-state actors accountable for AI-driven election interference remains challenging due to unresolved legal and technical attribution issues within current international frameworks. This difficulty, known as the 'responsibility gap', arises when AI-generated actions cannot be clearly attributed to individuals or states. Andreas Matthias (2004) highlighted this gap, explaining that AI systems capable of learning autonomously create scenarios in which the traditional conditions for human accountability, knowledge of and control over the system's actions, can no longer be met.
Santoni de Sio and Mecacci (2021) further specify four accountability gaps:
1. Culpability Gap: Difficulty assigning responsibility due to AI system complexity and unpredictability.
2. Moral Accountability Gap: Challenges in explaining or justifying AI-influenced decisions, driven by the opaque nature of these systems.
3. Public Accountability Gap: Difficulty in holding governments accountable for opaque AI-driven decisions, especially when such systems are developed by private entities, reducing transparency and public scrutiny.
4. Active Responsibility Gap: A lack of ethical awareness or motivation among AI developers and users, hindering proactive harm prevention.
These accountability gaps complicate international legislative efforts, especially given the diverse range of actors involved, from states to individuals and corporations.
Technology companies play a significant role in AI-enabled electoral interference through generative tools, microtargeting, and unregulated platforms such as Meta, X, and TikTok, where misinformation thrives. Algorithmic content curation further blurs reality, complicating accountability. International law traditionally regulates state behaviour, limiting its effectiveness in addressing corporate responsibility without explicit state consent or treaty-based obligations. The principle of state sovereignty and reliance on voluntary "soft law" mechanisms further weaken enforceability (Adeyeye, 2007).
While frameworks like the UN Guiding Principles offer foundational guidance, their voluntary nature undermines accountability. Proposed solutions include binding mechanisms, such as the Business and Human Rights treaty, extraterritorial regulations exemplified by the EU Digital Services Act, and extending existing international legal frameworks to include corporate liability, potentially through institutions like the International Criminal Court (Deva & Bilchitz, 2017; Bradford, 2020; Clapham, 2006). A combined approach using treaty law, extraterritorial regulation, and international legal reform is essential to effectively address corporate accountability in politically sensitive contexts (Ruggie, 2013; Cassel, 2016).
Toward Effective Regulation and Governance
In response to growing concerns over AI in electoral contexts, various states have implemented domestic regulatory measures aimed at mitigating the associated risks. However, these efforts remain fragmented and inconsistent. For instance, the U.S. Federal Election Commission (FEC) offers limited guidelines focused mainly on transparency in political advertisements utilising AI (Goodman, 2023). France and Canada have introduced more comprehensive transparency requirements for online political content featuring AI-generated elements (Cadwalladr & Graham-Harrison, 2020). California's AB-730 specifically mandates disclosure of synthetic media, such as deepfakes, in political campaigns close to election periods (West, Allen, & Horowitz, 2021). These varied domestic approaches underscore the need for harmonisation across jurisdictions (Kuner et al., 2022).
At the international level, multilateral bodies recognise national regulatory limitations and advocate for broader governance frameworks. While binding treaties are absent, several soft law instruments have emerged. The United Nations' Global Digital Compact proposes principles for transparent and accountable use of technologies in democratic processes (UN Secretary-General, 2023). Likewise, the G7’s Hiroshima AI Process and OECD’s AI Principles emphasise risk-based governance approaches focusing on accountability and robustness (OECD, 2019; G7, 2023).
Scholars propose a binding "AI Geneva Convention" to regulate AI in politically sensitive contexts, particularly elections, establishing enforceable principles and sanctions for interference (Crootof, 2021). Given the cross-sector, global nature of digital ecosystems, purely state-driven regulation is inadequate. Effective governance requires multi-stakeholder collaboration involving states, technology companies, civil society, academia, and international organisations. Initiatives such as the Partnership on AI and the Internet Governance Forum demonstrate the effectiveness of cross-sector collaboration through practical measures like standardised disclosures, watermarking of synthetic media, and rapid-response mechanisms against electoral manipulation (Cath, 2018; Floridi et al., 2018). Democratic resilience ultimately depends on these networks of shared responsibility and cooperative governance.
Recommendations & Future Direction
Addressing the growing threat of AI to electoral integrity requires a coordinated, global, and multidisciplinary approach. An international legal framework, similar to the Geneva Conventions, should be developed through inclusive dialogues involving states, multilateral organisations, and civil society, establishing principles of transparency, fairness, and non-interference (Crootof, 2021). Clear mechanisms for attribution and accountability are critical, necessitating advances in explainable AI and global traceability standards to accurately assign responsibility for interference (Doshi-Velez & Kim, 2017). Companies deploying electoral AI technologies must conduct comprehensive human rights and democracy impact assessments, institutionalising corporate accountability akin to environmental regulatory models (Wagner, 2022). Supporting capacity-building efforts for electoral bodies, particularly in the Global South, through technical assistance, funding, and training facilitated by international organisations such as the UNDP and the Carter Center, is essential (Levitsky & Way, 2015). Additionally, promoting AI literacy and voter resilience through education campaigns can empower citizens to critically evaluate AI-generated content, reducing their vulnerability to manipulation and enhancing democratic participation (Guess, Nyhan, & Reifler, 2020).
Conclusion
The increasing use of artificial intelligence in federal elections presents significant risks, including disinformation, foreign interference, voter manipulation through microtargeting, and deepfakes. These issues highlight critical unresolved legal challenges concerning attribution, accountability, and the current limitations of international law in regulating state and non-state actors. The integrity of electoral democracy depends on effectively holding governments and corporations accountable for deploying or enabling AI technologies that threaten democratic processes. Without clear legal frameworks and enforceable obligations, public trust in electoral systems risks further erosion. Reaffirming democratic norms and creating robust, enforceable legal frameworks for AI governance is now an urgent necessity. Transparency, accountability, and international cooperation are essential to safeguarding democracy in an increasingly digital world.
Bibliography
Adeyeye, A. (2007). Corporate responsibility in international law: Which way to go? Singapore Year Book of International Law, 11, 141–161.
Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford University Press.
Cadwalladr, C., & Graham-Harrison, E. (2020, February 8). Revealed: The Facebook loophole that lets world leaders deceive and harass their citizens. The Guardian. https://www.theguardian.com/technology/2020/feb/08/facebook-loophole-world-leaders-deceive-harass-citizens
Cassel, D. (2016). Outlining the case for a common law of international corporate responsibility. Northwestern Journal of International Law & Business, 36(3), 303–338.
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
Center for AI and Digital Policy. (n.d.). United Nations resources on AI policy. https://www.caidp.org/resources/united-nations/
Clapham, A. (2006). Human rights obligations of non-state actors. Oxford University Press.
Collins, B. (2024, February 23). Russia, Iran and China are using AI in election interference efforts, U.S. intelligence says. NBC News. https://www.nbcnews.com/tech/security/russia-iran-china-are-using-ai-election-interference-efforts-us-intell-rcna172476
Crootof, R. (2021). A Geneva Convention for AI? Vanderbilt Journal of Transnational Law, 54(2), 351–402.
Deutsche Welle. (2024). Fact check: How Elon Musk is spreading US election lies. https://www.dw.com/en/fact-check-how-elon-musk-is-spreading-us-election-lies/a-70663408
Deva, S., & Bilchitz, D. (Eds.). (2017). Building a treaty on business and human rights: Context and contours. Cambridge University Press.
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv. https://arxiv.org/abs/1702.08608
European Parliament. (2023). EU AI Act: First regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
Franke, U. E. (2024). AI governance as a global challenge. Global Policy, 15(1), 83–92. https://doi.org/10.1111/1758-5899.12890
G7. (2023). G7 Leaders’ Hiroshima Vision for AI. G7 Summit. https://www.g7hiroshima.go.jp/en/documents/
Goodman, E. P. (2023). A political deepfake dilemma. Journal of National Security Law & Policy, 13(1), 89–124.
Guess, A., Nyhan, B., & Reifler, J. (2020). Exposure to untrustworthy websites in the 2016 U.S. election. Nature Human Behaviour, 4, 472–480. https://doi.org/10.1038/s41562-020-0833-x
Kuner, C., Marelli, M., & Greenleaf, G. (2022). Data protection law and international transfers of personal data: A European view. International Data Privacy Law, 12(1), 1–18. https://doi.org/10.1093/idpl/ipab027
Levitsky, S., & Way, L. A. (2015). The myth of democratic recession. Journal of Democracy, 26(1), 45–58. https://doi.org/10.1353/jod.2015.0007
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
OECD. (2019). OECD principles on artificial intelligence. https://www.oecd.org/going-digital/ai/principles/
Regan, H. (2024, February 12). Indonesia election deepfake scam uses Suharto AI. CNN. https://edition.cnn.com/2024/02/12/asia/suharto-deepfake-ai-scam-indonesia-election-hnk-intl/index.html
Ruggie, J. G. (2013). Just business: Multinational corporations and human rights. W.W. Norton & Company.
Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34, 1057–1084. https://doi.org/10.1007/s13347-021-00450-x
Simchon, A., Edwards, M., & Lewandowsky, S. (2024). The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus, 3(2), pgae035. https://doi.org/10.1093/pnasnexus/pgae035
Trends Research & Advisory. (2024). AI rivalries: Redefining global power dynamics. https://trendsresearch.org/insight/ai-rivalries-redefining-global-power-dynamics/
UN Secretary-General. (2023). Policy brief: A global digital compact. United Nations. https://www.un.org/en/global-digital-compact
Wagner, B. (2022). AI, human rights and due diligence: Towards a new accountability model. International Review of Law, Computers & Technology, 36(1), 1–20. https://doi.org/10.1080/13600869.2021.1997172
West, D. M., Allen, J. R., & Horowitz, M. C. (2021). Governing artificial intelligence: Ethical, legal, and societal implications. Brookings Institution. https://www.brookings.edu/research/governing-artificial-intelligence/