
Artificial Intelligence and Journalism: research and ethical reflection crystallising in the Deontological Code

  • Writer: Patrícia
  • 6 days ago
  • 5 min read

On November 8, the reform of the Deontological Code of the College of Journalists of Catalonia was approved, incorporating updates and thematic annexes to adapt it to the current challenges of journalism. Among them is a new annex specifically addressing the use of Artificial Intelligence (AI). The date coincides with the anniversary of the unanimous approval, on November 8, 1985, by the Parliament of Catalonia of the law that enabled the creation of the College of Journalists.


In recent years, the adoption of AI has rapidly consolidated within media organisations, transforming journalistic routines in Catalonia and around the world. While media outlets use AI to optimise editorial processes and to investigate, produce and distribute content, this integration poses substantial ethical challenges that threaten the credibility and fundamental principles of the profession.


For years, different perspectives, experiences and research projects have helped open new paths, formulate questions and build conceptual frameworks that anticipate dilemmas and guide the ethical analysis of AI in journalism. Faced with this technological acceleration, which was already generating a multitude of deontological doubts, the Consell de la Informació de Catalunya (CIC) took the initiative to address the issue with the report “Algorithms in Newsrooms: Challenges and Recommendations to Ensure that Artificial Intelligence Embeds the Ethical Values of Journalism”, published in 2021.


The CIC report, developed from expert opinion and sector-wide reflection, provided a reference framework and stated that the fundamental ethical principles of journalism (truth, justice, freedom and responsibility) remain unchanged and must guide any new technological tool. In 2023, I also launched a continuously updated project compiling more than 70 international ethical guidelines on the use of AI in journalism, which has made it possible to identify shared patterns and global lines of convergence in the criteria for ethical adoption.


All this work was born from the conviction that the profession needed references to address the new dilemmas that AI introduced, and over these years we have seen that initial purpose gradually crystallise into standards and orientations that now guide the profession. More recently, this research has given rise to numerous knowledge-transfer collaborations: it has fostered debate in national and international communication forums, supported dissemination within media organisations themselves, expanded academic research, and led to training initiatives and the drafting of sectoral style guidelines for media outlets such as Prisamedia, the CCMA and the EFE Agency, among others.


The most recent example of this crystallisation is the AI Deontological Annex that has just been approved, the result of the work of the College of Journalists of Catalonia and its Working Groups, in coordination with the Consell de la Informació de Catalunya. The document aligns with the recommendations already put forward by the CIC in 2021 and with the patterns journalism has been adopting, nationally and internationally, to advance towards a responsible adoption of AI in the media.


The similarities are evident in fundamental areas such as the need for human supervision, the obligation of transparency towards the public, the prevention of algorithmic bias by ensuring data quality, and the promotion of continuous training and interdisciplinarity within teams.


Below is a synthetic comparison between the principles set out in the AI Annex of the Deontological Code and the Recommendations of the CIC published in 2021. Reading these points in parallel shows the coherence and evolutionary continuity of the ethical framework that has been developed and matured within our professional ecosystem:


Maintenance of fundamental journalistic values (truthfulness and rigour). Both agree that AI cannot compromise journalistic standards and that the fundamental principles remain immutable.


Responsibility and human oversight. They establish that final responsibility does not lie with technology, but with people, and that human oversight is required.


Transparency and accountability. They reinforce the need to inform the public clearly of the degree of AI involvement in the production of journalistic content.


Prevention of bias and discrimination. They emphasise the importance of data quality and the need to detect and avoid the reproduction of biases and stereotypes.


Privacy protection and data management. They establish that privacy is a fundamental right and that the collection of personal data must be limited to what is strictly necessary.


Risk of personalisation (filter bubbles and pluralism). They warn that algorithmic personalisation can negatively affect pluralism and access to a diversity of voices and perspectives.


Deceptive synthetic content (deepfakes). They establish clear restrictions regarding the creation or dissemination of synthetic content that could mislead the public.


Training and multidisciplinarity. Both stress the need for continuous training and for mixed teams that combine technical knowledge with journalistic judgement.


In addition, the Annex incorporates more specific obligations that respond to the particularities of generative AI and that are consistent with the international patterns observed in the compilation of ethical guidelines analysed in recent years. Specifically, it directly addresses creators' rights, establishing the duty to ensure that AI-generated content “does not violate rights such as copyright, intellectual property or image rights”, an issue that has become critical since models began to be trained at scale on copyright-protected data and works. It also sets clear guidelines on synthetic content, stating that deepfakes and realistic simulations “do not meet the minimum requirements to be considered valid in journalistic contexts” when they represent real people or events.


Ultimately, the work that began back then arose from the perception that journalism needed guidance to face the new challenges posed by AI, and it is deeply rewarding to see how this shared effort, by many people and diverse institutions, has helped consolidate criteria that are now part of the normative framework that guides us and strengthens the ambition for responsible journalism in the AI era.


The approval of the new Deontological Code took place within the framework of the Journalists’ Congress held this past weekend, where I took part in a session on the responsible adoption of AI in the media, moderated by Mariola Dinarès i Quera. Here are some of the points we discussed:


  • I believe media organisations cannot simply say “no” to AI: it not only optimises workflows… it enables journalism that was not possible before.

  • However, it is essential to define clearly where AI should be — and where it should not.

  • The important debate is not so much technological as ideological: who controls the infrastructures, questions of sovereignty, and how we challenge the narrative of productivity as the only lens.

  • We should think in terms of complementarity, not substitution.

  • Ethics is not pure chemistry: the use and design of tools matter. And each media organisation needs its own criteria, aligned with its values and its strategy.

  • Corporate tools and personal tools can coexist — but this requires order, clear policies and boundaries.


The videos of the congress will be available very soon! Once they are, I’ll share the full recording of my talk here.
