Natalia Zuazo is a political scientist, journalist, and consultant specializing in politics and technology. She advises governments and international organizations such as UNESCO, where she is a Senior Consultant on ICTs and Artificial Intelligence for Latin America. She leads SALTO, a techno-political agency, and has spearheaded key projects such as PubliElectoral, which promoted transparency in digital advertising. She has helped train more than 15,000 journalists and public officials in AI, human rights, and technology, and coordinated UNESCO’s Artificial Intelligence and the Rule of Law program in the region. She contributed to regulatory frameworks such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence and to the project to update Argentina’s Personal Data Protection Law. She is the author of influential books such as Internet Wars and The Owners of the Internet, and has worked with organizations like Access Now and Privacy International. Her approach combines communication, research, and critical theory, addressing issues such as technological inequality and digital rights. Recently, Natalia joined the International IDEA team as a facilitator of the Artificial Intelligence for Electoral Actors workshop held in Panama City, where we spoke with her about her views on the implications of AI for our democracies.

What do you consider to be the main risks posed by the use of artificial intelligence in electoral processes in Latin America and the Caribbean, especially in contexts where there are already structural challenges such as disinformation and low institutional trust?

Indeed, the risks posed by the use of artificial intelligence (AI) in the electoral processes of Latin America and the Caribbean compound the challenges the region already faces: chronic disinformation, low trust in democratic institutions, and steadily declining voter participation. In this context, AI, which introduces new distortions into how citizens are informed and hands campaigns new tools, can deepen these problems. That is why it is important to understand and address the challenges it poses through ethical principles or regulation, not to halt innovation, but to ensure that technological development is guided by a value system that is also democratic.

First, AI allows various actors, not always identifiable, to deploy large-scale disinformation campaigns, amplified by networks of bots and trolls that distort public debate and further erode citizen trust. In addition, the use of micro-targeting and predictive analytics in campaign strategies enables sophisticated, deliberate interventions in public opinion, exploiting personal data to deliver messages designed to reinforce echo chambers and polarization. This not only limits democratic debate but also fosters an environment in which building consensus (and even constructive dissent) becomes increasingly difficult.

Furthermore, the vendors of AI systems that affect elections, whether by categorizing people, creating and distributing information, or segmenting informational environments, are not particularly invested in the democratic system. Many of these actors are focused primarily on maximizing profit rather than on contributing to the quality of public debate. As Giuliano da Empoli points out in The Engineers of Chaos, behind the sellers of these technologies there are not just neutral algorithms but strategists who have understood how to turn informational disorder into a powerful tool of influence.

That is why those of us who still believe that democracy, though imperfect and open to improvement, is the best system we have must infuse these non-neutral technologies with values. In the electoral sphere, this means creating spaces to develop regulatory frameworks and governance practices that ensure AI use responds to principles of fairness, transparency, and justice, and not solely to the economic or political interests of those who hold technological power or sell these tools.

From your experience, how can AI contribute positively to strengthening electoral integrity and democratic participation, particularly among historically marginalized communities such as women, Indigenous peoples, and LGBTQI+ individuals?

Electoral integrity and democratic participation have been built, throughout history, with different technologies: censuses, electoral systems, voter registries, ballot boxes, geographic segmentation, and vote-counting systems that have evolved as new technologies were incorporated to deliver results with greater efficiency and transparency. In the case of AI, we can envision new uses being adopted gradually, such as maintaining and verifying voter lists, optimizing polling station locations, and implementing voter authentication systems, among others.

However, it is important that adopting these technologies not undermine basic human rights; where that risk exists, the benefits of incorporating them must be weighed carefully against the potential harms. For example, many electronic voting models have been discontinued because their shortcomings outweighed their advantages.

At present, the negative effects of AI on this front outweigh the benefits. We know that deepfakes, widely used in campaigns, target women in 98% of cases, seeking to silence or intimidate them. Female politicians, activists, and journalists are among the most frequent victims, attacked not only by trolls and anonymous accounts but also by named politicians. We are living through a moment of enormous violence that requires collective solutions more typical of “other times”: widespread condemnation of attacks, the strengthening of shared spaces, and the search for spaces for dialogue that truly respect all voices.

In terms of democratic governance, what regulatory frameworks or ethical principles should electoral bodies in the region prioritize to ensure responsible and transparent use of AI in elections?

To ensure responsible and transparent use of artificial intelligence in elections, electoral bodies in Latin America should prioritize regulatory frameworks that combine human rights, transparency, and accountability principles. This includes applying existing rules on personal data protection, freedom of expression, non-discrimination, and access to information, in line with the American Convention on Human Rights and the standards of the Inter-American Commission on Human Rights’ Special Rapporteur for Freedom of Expression. These rules impose an active obligation on states to protect rights and ensure that AI use does not undermine democratic participation or citizens’ privacy.

In addition, it is essential to promote a risk- and rights-based approach, such as that of the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, which emphasizes the need to identify, assess, and mitigate the risks AI poses to democratic processes. This framework calls for human oversight, transparency, and periodic audits, measures that allow electoral authorities to anticipate and respond to challenges posed by AI, including disinformation and the use of deepfakes.

Lastly, electoral bodies can draw inspiration from regional examples, such as Brazil’s Superior Electoral Court Resolution 23.732/2024, which requires watermarks on AI-generated content and bans deceptive deepfakes and chatbots, or Chile’s AI bill, which classifies technologies that affect fundamental rights as “high risk” and requires transparency and human control. These sectoral frameworks offer a concrete path to strengthening electoral integrity and public trust.

What key lessons can be learned from other regions of the world that could be useful for Latin America and the Caribbean in implementing AI technologies in electoral processes?

One key lesson from other regions, such as the European Union, is the importance of a preventive, risk-based approach to implementing AI in electoral processes. The EU’s AI Act and Digital Services Act (DSA) treat systems used in the administration of democratic processes, including personalized political advertising, deepfakes, and AI-generated disinformation, as high risk, subjecting them to transparency requirements, human oversight, and independent audits. This allows the potential impacts of the technology on electoral integrity to be anticipated and mitigated before harm occurs.

Another lesson is the value of technological transparency and traceability. For example, these rules require AI-generated content to be clearly identifiable (e.g., via watermarks on deepfakes or labels on automated political ads), which makes it easier for citizens and authorities to oversee information flows. The same principle means requiring technology platforms to take active responsibility for assessing and reducing disinformation risks during electoral periods, something that could be replicated in Latin America given the growing use of digital platforms in campaigns.

Does this mean the European Union has fewer problems in its electoral processes than our region? Not necessarily. Moreover, technological advances often outpace the political consensus needed to mitigate their risks. But I believe that in Latin America and the Caribbean much more can still be done to adopt more homogeneous standards, which states can then enforce, to strengthen public trust and protect democracy from technological risks.

Considering the rapid advance of generative AI and its potential to create misleading content, what role should the media and civil society play in digital literacy and oversight of these technologies during electoral periods?

The media have a fundamental role to play. In a context where they are also grappling with their own economic crises, struggling business models, and the loss of old revenue margins to technology platforms, the temptation to spread misleading content to grow their audiences is certainly strong. Social media platforms once claimed to be neutral with respect to content distributed by third parties, including media outlets. Today we know that, through algorithms and recommendation systems that reward engagement, they are not. And the media, which used to challenge those claims, now also replicate practices that reward extreme emotional reactions, reproduce violence, and promote polarizing opinions, whether because of their own digitization processes or because they were born in the digital environment.

That is why so-called digital literacy, which is nothing new but rather the habit of being aware and critical of how information circulates, where it originates, and the intentions of those who produce it, is essential. It matters for everyone: not only children and adolescents but also adults, who, because of our own biases, often share unreliable information or consume the very things we criticize. With AI this becomes even more challenging, because it is increasingly difficult to discern not only “the truth” but also the foundations on which claims rest. There, too, a great deal of work awaits us.