
By Laura Amigo
As artificial intelligence becomes increasingly prevalent in public communication practices, it raises as many hopes as concerns about the quality of democratic debate. This article revisits these issues by highlighting the limitations of current ethical frameworks and presenting avenues for reflection on more responsible public communication.
The ongoing digitisation of the media, combined with the rise of artificial intelligence (AI), is profoundly transforming the way we communicate publicly. Citizens have multiple access points to information and numerous spaces to express their opinions in public debates, notably by producing, commenting on, reacting to or sharing content. This wide range of means enables them to take action on local, national or global issues, often outside traditional organisational frameworks, according to a logic of connected action.
The resulting reconfiguration of the public sphere is marked by more flexible and participatory communication processes that value individuality. But it is also accompanied by a fragmentation of practices, a blurring of the boundaries between the public and private spheres, and a more chaotic circulation of discourse, where content from diverse sources and of varying quality is juxtaposed. While forms of citizen empowerment are emerging, the utopian vision of the web in the 1990s as a lever for achieving democratic ideals seems to be a thing of the past.
The amplification of phenomena such as hate speech and fake news (which have always existed but were less visible) is contributing to a ‘disenchantment with the internet’: online public debate is becoming a brutal battleground. Moreover, despite the wealth of information available, it remains an open question whether the algorithmic functioning of platforms and search engines tends to expose citizens mainly to content that matches their existing opinions – a phenomenon that could undermine the conditions for deliberative democracy.
This question ties in with a broader reflection on the growing role of AI in public communication, a phenomenon with multiple social implications.
AI, between democratic promises and informational risks
Artificial intelligence is now part of everyday life: voice assistants, recommendations, facial recognition, etc. It is increasingly integrated into communication and information professions, such as journalism. Awareness of this growing presence is reigniting debates on ethics, democracy and regulation, and raising new questions that remain unanswered.
Like any technical innovation, the development of AI paves the way for changes that are as beneficial as they are worrying. For example, it could promote more informed discussions to help citizens make decisions in the political arena, or support a more inclusive form of democracy. In some public services, AI systems are already being used to improve the accessibility of information. For example, automatic transcription and instant translation tools enable people with disabilities to better understand institutional content, thereby strengthening inclusion.
At the same time, AI carries major risks: bias in the data feeding AI systems, which is often difficult to detect; interference by trolls in electoral processes; and the virality of deepfakes – ultra-realistic fake videos and audio generated with the help of AI. In recent years, for example, several manipulated videos have circulated online: Barack Obama insulting Donald Trump, ‘Amandine Le Pen’ – a fictional character presented as Marine Le Pen’s niece – affirming her attachment to the values of the National Rally, and Bollywood actors calling for a vote for the opposition in the 2024 Indian general elections.
In a European context marked by a decline in trust in political and media institutions, there is an urgent need to rethink the conditions for ethical public communication. This means reflecting on the influence of digital technologies on democracy, without succumbing to technological determinism or subscribing to euphoric discourse.
Training in AI: a new challenge
In this regard, media and digital literacy is often presented as a major lever for strengthening citizenship. It aims to develop skills for taking a critical view of digital uses and for better understanding the power relations that structure the media ecosystem. The Council of Europe’s Digital Citizenship Education (DCE) project, which promotes the development of skills and values that encourage citizen participation, illustrates this approach. Beyond this example, many institutions, companies, NGOs and international organisations have also taken up these issues, adopting a variety of approaches. This complex and rapidly evolving subject calls for systemic responses that combine expertise, public action and citizen participation.
It is with this in mind that the European DIACOMET project, funded by the European Union’s Horizon Europe programme, was launched. It brings together partners from eight countries: Austria, Estonia, Finland, Hungary, Lithuania, the Netherlands, Slovenia and Switzerland – through the Università della Svizzera italiana. Its objective is to contribute to strengthening ethical and responsible public communication by promoting the development of civic resilience in the face of information distortion. The project proposes a framework for action based on research, the production of educational tools and recommendations for public decision-makers, in order to help the various actors – whether individuals, groups or organisations – to deal with the ethical dilemmas associated with public communication.
At the heart of this approach is the development of a concept of dialogic communication ethics, which would provide a framework for an inclusive model of accountability mechanisms combining media accountability (at the organisational level) and civic accountability (at the citizen level) and guided by principles of good communication conduct. Among the themes addressed by the project, the governance of AI in public communication appears to be an area where current frameworks are struggling to keep pace with technological developments.
Regulating AI in public communication: a work in progress
By changing the traditional logic of public communication, AI raises new challenges in terms of governance, transparency and accountability. In this context, researchers from the DIACOMET project have examined how ethical codes and guidelines from various sectors – from journalism to advertising, institutional communication, public relations, small media outlets and users – address the issue of AI. Even as debates on its ethical use multiply in the media, political spheres and associations, regulatory frameworks remain fragmented and unevenly developed.
Of the 435 documents analysed, only 63 (barely 15%) explicitly mention AI or automation. And of these, barely 20 place AI at the centre of their thinking. The others only mention it in passing. The majority of documents dealing with AI come from supranational organisations. They are based on a normative vision founded on fundamental rights and seek to harmonise practices at the European level. Conversely, documents produced at the national level are more heterogeneous, often adapted to specific legal or cultural frameworks.
Thematic analysis of these codes of ethics highlights three main categories of risks, covering the different stages of the AI life cycle.
- Development and operation. Several documents highlight the risks that arise when AI systems lack technical and social robustness. They point to risks of content generation errors (such as ‘hallucinations’ or algorithmic biases), privacy violations, legal compliance issues (e.g. with human rights standards), and the risk of users becoming overly dependent on systems whose capabilities and limitations they may not fully grasp.
- Legitimisation of use. In this area, AI is often presented as a tool for streamlining or increasing efficiency in managing large volumes of data (automated moderation, resource optimisation, etc.). However, the need for human supervision remains central, particularly in the media, where codes specify that AI cannot replace journalists.
- Deployment. The documents analysed mention the risks associated with content manipulation (bots, trolls or deepfakes) and the confusion between real and synthetic content. Some documents also mention the positive aspects of AI: more participatory and pluralistic processes that promote creativity, accessibility, diversity and inclusion.
In terms of governance, the most frequently cited principles are human supervision, transparency, and data confidentiality and security. In journalism, for example, AI is seen as a tool for improving work efficiency, without calling into question professional standards, which are still presented as essential. The most commonly advocated form of accountability is based on continuous and proactive evaluation of the AI systems used.
Despite the existence of a shared set of principles, no robust and widely adopted regulatory framework appears to be emerging. Existing mechanisms remain too general, and their application is uneven and often incomplete. This situation highlights the need for more dynamic governance mechanisms that are capable of keeping pace with the rapid evolution of technologies and their uses, and of involving all stakeholders.
The DIACOMET project focuses on governance and ethical issues in public communication and journalism. (Photo: DIACOMET)
Towards an ethics of public communication that meets the challenges of the digital age
At a time when the public sphere is being swept by waves of disinformation, polarisation and rapid technological change, there is an urgent need to rethink the ethical foundations of public communication. The challenge is not only to regulate technologies, but also to preserve the conditions for informed, pluralistic and inclusive public debate. This requires the joint mobilisation of public actors, communication professionals, technology developers, civil society and researchers.
This is the challenge that the DIACOMET project aims to address by proposing concrete tools and shared frameworks for public communication that is both ethical and dialogical. In the age of artificial intelligence, preserving the ethics of public communication is no longer just a technical imperative: it is a democratic requirement.
This article was originally published by Ajen Newsletter on December 9, 2025