
Ethical guidelines for the use of artificial intelligence in journalism

  • Writer: Patrícia
  • Apr 13, 2023
  • 16 min read



Since the Catalan Media Council published its report and decalogue on the ethical use of artificial intelligence in the media in 2021, several guides on ethics, AI, and journalism have emerged, developed mainly by media outlets and news agencies. Most of them appeared in Europe, and all those with a confirmed publication date are from 2023 onwards, following the emergence of ChatGPT. Without a doubt, generative AI provided a strong impetus for the publication of codes of conduct.


In this article, I will describe and examine some of the most relevant ones. Later on, I will present the patterns identified among the ethical guidelines.


At the end of the post, you will also find a table with additional ethical guidelines from around the world, which I continue to update. In total, 64 ethical guidelines are linked.


I will keep updating the table as new ones are published. If you know of any that aren't listed, let me know in the comments or here and I will add them.




Ethical guidelines for the use of artificial intelligence in journalism



Wired (2023)

They focus on generative AI, and in this article they explain their rules. The key points: they will not use AI to generate or edit text, but they clearly state that it may be used to suggest headlines or social media copy. They also allow its use to generate ideas for potential topics, for research, or as an analytical tool. "We don’t want our reporters to rely on these tools any more than we would allow them to rely on Wikipedia’s limited information," they affirm.


Regarding image generation tools:

  • They will only publish images generated by artists in which the artist’s creative input outweighs the contribution of the tool. Special care will be taken to ensure that the image does not mimic existing work or infringe copyright.

  • They will not replace stock images with AI-generated ones, since many professional photographers make a living by selling images to such databases, at least not until AI companies compensate the creators of the works their tools were trained on.

  • They may use generative AI for images to brainstorm ideas.



------

If you are considering developing your own code of conduct for the responsible adoption of AI, the decalogue of recommendations on page 34 of the CIC report can be a useful starting point.

------



Springer Nature (2023)

Springer Nature also focuses on large language models (such as GPT) and argues that “publishers must acknowledge their legitimate uses and establish clear guidelines to prevent misuse.”

So far, they have established two basic rules. First, an AI cannot be listed as an author. Second, any use of AI must be disclosed:

  1. No tool will be accepted as a credited author of a research paper. This is because any attribution of authorship entails responsibility for the work, and AI tools cannot assume such responsibility.

  2. Researchers who use tools such as large language models must document their use in the methods or acknowledgements sections. If an article does not include these sections, the introduction or another appropriate section may be used to provide this explanation.



Heidi News (2023)

They affirm that they do not want to turn their backs on technological advances, but believe it is necessary to establish guidelines for the use of AI based on "the ethics that govern" their activity and, "above all, the relationship of trust" with their readers. Once again, this French-language Swiss outlet focuses on generative AI:


General Principles:

  • AI may be used to facilitate or enhance processes, but human intelligence will remain at the core of all editorial production. No content will be published without prior human oversight.

  • They support the work of journalists, authors, photographers, and illustrators, and affirm that they do not intend to replace them with machines.

Synthetic Texts

  • All published articles will be signed by one or more journalists, who remain responsible for the truthfulness and relevance of the information.

  • Artificial intelligence may assist journalists by refining raw data in the same way that software like Excel does. It may also complement article writing, comparable to an online thesaurus or automatic spell checkers.

  • AI is considered a tool, but in no way a source of information.

Synthetic Images

  • The use of synthetic images will be limited to illustrative, not informational, purposes to avoid creating confusion about real-world facts.

  • They will not publish a synthetic image that could be mistaken for a photograph, except for educational purposes when the image is already publicly available.

  • Any synthetic image published will include a visible label explaining its origin. The caption will mention the AI model used and the main prompt given.

  • Major topics (investigations, reports, etc.) will not be illustrated with synthetic images unless the creations were designed by an artist using AI under their own responsibility.



DPA (2023)

The German news agency DPA will base the ethical use of AI on five rules. In developing them, it appears to have considered the EU’s guidelines for trustworthy AI:


  1. DPA uses AI for various purposes and is open to expanding its use. AI will help us do our work better and faster, always in the interest of our clients and our products.


  2. AI is used only under human supervision. The final decision regarding the use of AI-based products is made by a human. Human autonomy and the primacy of human choices are respected.


  3. Only legitimate AI is used—AI that complies with applicable laws and legal provisions, and respects ethical principles such as human autonomy, fairness, and democratic values.


  4. AI used must be technically robust and safe, minimizing the risk of errors and misuse. When content is generated exclusively by AI, this is done transparently and in an explainable manner. There is always a person responsible for all AI-generated content.


  5. All employees are encouraged to be open and curious about the possibilities of AI, to test tools, and to suggest how they might be used in workflows. Transparency, openness, and documentation are essential.



BBC (2023-24)

The BBC published its six guiding principles and a self-audit checklist for Machine Learning teams (engineers, data scientists, product managers, etc.) some time ago. These are based on public service values and designed to be practical. The BBC hopes this framework "will be a useful contribution to the development of responsible and trustworthy AI and ML." The guidance is reflected in this process:

[Diagram: BBC machine learning process]

The BBC and Generative AI

Focusing specifically on generative AI, in October 2023 the BBC published three guiding principles to steer its use of AI. All three align with key recommendations from the CIC report:


1. Act in the public’s best interest and use AI in service of the mission of journalism.

2. Empower the human factor, prioritizing talent and creativity.

3. Transparency and accountability.


Later, in February 2024, the BBC shared three frameworks derived from these principles, under which it is developing projects worth highlighting here as strong examples of AI being used in service of journalism’s mission:


  1. Maximize the value of existing content

  2. Deliver new experiences to the audience

  3. Optimize processes: “Make how we do things faster and easier.”


They also detail some use cases:


1. Maximize the value of existing content

  • Translate into multiple languages.

  • Reformat existing content to make it more engaging.


2. Deliver new experiences to the audience

  • A BBC Assistant: chatbot to offer interactive and personalized learning.

  • More personalized marketing: create more personalized texts, images, and videos in content and marketing services.


3. Optimize processes

  • Support journalists

    • Explore how to equip journalists with generative AI tools to help them work faster. For example, a “headline helper” that could suggest headline options, summarize an article, or connect it to related material.

    • Simplify how content is organized and tagged.

      • Explore how generative AI can help teams find content within programs through practices like better tagging. This could help them create new forms of content more quickly, such as identifying a clip or set of moments within a program or across programs.


Ethical guide for the use of AI

Also in February 2024, the BBC updated its ethical guide for the use of AI. It specifies that AI use must be guided by its editorial guidelines and core values—especially accuracy, impartiality, fairness, and privacy.

Regarding editorial issues, first, any use of AI must be justified by the task assigned to it; second, there must always be an editorially responsible figure overseeing the development of the system; third, the audience must be informed whenever AI is involved. Moreover, the content must explain why and how AI was involved.


The guide emphasizes the need to pay special attention to bias, "hallucinations," and the plagiarism risks posed by some tools—implying acceptance of mainstream generative AI tools. However, aware of these risks, the guide reiterates the necessity of human oversight and the presence of an editorially responsible person for each AI use. It also states its position on the ethical dilemma of using tools trained on copyrighted content. In this regard, it asserts that any use of AI must consider creators’ rights while also allowing for the creative use of new forms of expression.


The guide also includes a section on the "non-editorial" risks of using AI, such as legal issues, data protection, etc. These are addressed by a multidisciplinary commission called the AI Risk Advisory Group (AIRA). This group includes legal, data protection, finance, commercial, and editorial experts who evaluate cases and make recommendations from both editorial and non-editorial perspectives.

The BBC’s recommendations for the use of AI follow the framework of the three production phases, with specific guidelines for each phase summarized below:


Content creation

In this phase, the BBC states it does not allow publication of AI-generated content. However, it considers some exceptions. First, it accepts the use of generative AI to create graphics. It may also accept other uses—text or image—as long as they don’t affect the editorial perspective or distort the meaning of the information. An example of an acceptable use would be synthetic voice generation, as long as the voice is not cloned.


Content production/research

In this phase, AI can be used to extract and/or analyze information. On the one hand, the BBC sees no risk in using AI for ideation or other creative techniques. On the other hand, it identifies some risks in using AI to analyze information. Given the possibility that the tool may produce "editorially inappropriate" output, it recommends "careful human editorial oversight."


Distribution

Recommendation systems and other personalization tools are considered "editorial experiences," and must therefore receive editorial approval. Their results must align with the BBC’s editorial values—especially impartiality, fairness, and the right to reputation.

Finally, the BBC guide specifies that externally sourced content must also be editorially reviewed by its teams to ensure it aligns with its editorial standards.


The Guardian (2023)

The Guardian has formed a multidisciplinary team to develop its own ethical guidelines focused on generative AI. The principles governing the use of these tools are:

  1. For the benefit of readers: Generative AI will be used only where it contributes positively to their original journalism and with the specific approval of a senior editor. They will be transparent about its use.

  2. For the benefit of their mission, staff, and the rest of the organization: Aimed at improving product quality and process efficiency.

  3. With respect for the individuals/organizations who create content: They will prioritize tools that respect copyright.


ANP (2023)

The Dutch news agency ANP believes that AI should be approached by its journalists with an attitude of “wonder, curiosity, criticism, and openness.” “It is up to editors to determine whether the use of AI applications in editorial or journalistic productions is relevant and useful. In short, AI is a tool, not a replacement.”


Some key principles are highlighted, relating to the need for AI oversight and the necessary assumption of human responsibility:

When AI plays a role in the editorial process, a person must always carry out the final verification. ANP describes the editorial production chain as person > machine > person. “Thinking and decision-making begin and end with the person. Therefore, we do not use computer-generated or AI-produced content, not even as source material, without a human verifying this information,” they state.

AI may only be used with the authorization of supervisors and editors-in-chief. Special care will be taken to avoid potential errors (hallucinations or bias).



Reuters (2023)

Editor-in-chief Alessandra Galloni and ethics editor Alix Freedman sent a message to the agency’s staff outlining several key points related to the use of AI:


First, Reuters views AI technology—including generative text models like ChatGPT—as an innovation with the potential to enhance its journalism and empower its journalists. Reuters explains that since its founding, it has embraced new technologies to deliver information, "from carrier pigeons to the telegraph and the internet." More recently, it has used automated systems to report on economic topics, which has been essential to meet the speed demands of its clients; therefore, content automation is nothing new for Reuters.


The remaining points of the guidelines are as follows:

Second, Reuters journalists and editors will supervise content created with AI. A Reuters story is a Reuters story, regardless of who produces it or how it is generated, and their editorial standards and ethics apply. If a journalist’s name appears on a story, they are responsible for ensuring it meets these standards; if a story is published fully autonomously, it will only happen because Reuters journalists have determined that the underlying technology can deliver the required quality and standards.


Third, Reuters will inform its audience about its use of these tools. Transparency is an essential part of its ethos, the statement says. Reuters will provide readers and clients with as much information as possible about the origin of a story—from the specificity of the source to the methods used to create or publish it. This does not mean they will disclose every step of the editorial process. But when the use of a specific AI tool is important to the outcome, they will be transparent.


Finally, exploring the possibilities offered by this new generation of tools is not optional, even though they are still examining how to make the best use of them. Reuters' Trust Principles demand "leaving no stone unturned to expand, develop, and adapt" the news. They also require that Reuters provide “reliable” information. Given the proliferation of AI-generated content, they must remain vigilant to ensure that sources are real. Their mantra: Be skeptical and verify.

In summary, Reuters will leverage AI to support its journalism when they are confident that the results meet their standards of quality and accuracy, with rigorous oversight by newsroom editors.



Financial Times (2023)

"Our journalism will continue to be reported, written and edited by humans who are the best in their fields" is the message that opens the letter the FT editor sent to the team. The guidelines, as in most cases, are focused on the uses of generative AI.


After explaining what generative AI is, the letter highlights the technology’s opportunities: improving processes and productivity. It then mentions the risks: lack of rigor, fabricated facts, biases...


It expresses confidence that journalism will become even more essential in this context:

"At a time when misinformation can be generated and spread quickly and trust in the media has generally declined, at the FT we have a greater responsibility to be transparent, report the facts and seek the truth. That is why FT journalism in the new AI era will continue to be reported and written by humans who are the best in their fields and who are committed to reporting and analysing the world as it is, accurately and fairly."


At the same time, it explains that the FT will explore new uses of AI to support journalism, and responsibly, it notes: "as recent history has shown, excitement must be accompanied by caution in the face of the risk of misinformation and the corruption of truth. The FT will remain committed to its core mission and keep readers informed as generative AI and our own thinking evolve."



USA Today (2023)

In this case, the responsible editor must approve the use of AI in the news piece, and that approval is based on purpose — that is, whether the use of AI aligns with the outlet's mission. Once approved, the use of AI must adhere to the following principles:


  • Transparency and accuracy: All AI-generated information must be verified. Be transparent, disclose when AI has been used, and fact-check any AI-generated content.

  • Editorial judgment: Ensure the outlet’s values are reflected in all content.

  • Permitted use: Ensure that AI is used ethically and legally.

  • Generative AI (images): Limit its use. Synthetic images must never be used to illustrate breaking news, and they must be clearly labeled so there is no doubt that they do not depict real events.

  • Privacy: Ensure that AI use does not violate users' privacy.

  • Bias: Make sure the content is not discriminatory.

  • Safety: Prevent any kind of error or lack of accuracy in synthetic content.

  • Diversity and inclusion: Content must reflect the diversity of the audience.




EFE Agency (2024)

In its Nuevo libro de estilo urgente, the EFE news agency decided to include a section on the topic, emphasizing the need to establish clear principles for the use of AI that adapt to its rapid evolution. EFE acknowledges that certain AI applications can be useful as support tools for journalists, although they should never replace their work. It highlights the importance of constantly updating and supervising these tools, given their tendency toward inaccuracies and lack of verifiability. In addition, the use of such tools must be clearly indicated, including a note at the end of the text specifying its automatic origin. A summary of the key points:

  • Only a support tool. AI systems may, in some cases, assist EFE journalists in their work, but they are never to perform journalism themselves; that role belongs solely to journalists.

  • Caution and constant updates. The rapid evolution of these tools requires continuous updates to the relevant guidelines.

  • Supervision is always required. Every contribution made by AI tools must be reviewed, checked, and verified. It is not necessary to cite the use of tools such as translation or transcription software, just as other editorial aids are not cited.

  • Images. EFE will not use AI-generated images to illustrate current news. These systems are only considered acceptable for illustrating discoveries that cannot be photographed, for example, what an animal reconstructed from a fossil might look like, or how space might appear from an inaccessible location. In all cases, whether the recreations are produced internally or provided by others, their nature must be clearly and unequivocally indicated in the caption or accompanying video text.

  • Automatically generated content based on identified sources. News created in this way must include the source in the text, like any other, and also carry a clarifying note at the end: This information has been generated automatically from data provided by...



El País (2025)

This newspaper has recently adapted its style guide to include a set of rules for using AI. Through an article by the reader's advocate, we know that the main criteria it follows are human supervision of any AI usage, the appointment of a responsible editor for its use, and transparency in communicating to readers when this technology is involved.


The newspaper emphasizes that particular attention should be paid to any use of AI that could alter the meaning of a message, "that could be confused with reality, could mislead, spread disinformation, or violate privacy and the right to one's image."


Regarding the specific case of the illustrators collaborating with El País, the newspaper allows them to use AI, provided they communicate this, and according to the newspaper's Art Director, Diego Areso, "the elements created by artificial intelligence should be a secondary ingredient, subject to the overall concept of the illustration, and should not diminish the quality of the published material."


Furthermore, all employees are required to inform the editorial team about the AI tools they use, so that it can be assessed whether they pose a risk to the security of the group's digital infrastructure.

The annex to the style guide, which the newspaper has not yet fully detailed, is based on the premise that AI will serve professionals to enhance the quality of journalistic work and not replace it.



The New York Times (2024)

The New York Times recognizes that artificial intelligence represents a fundamental shift in knowledge work. While acknowledging the risks of its use, it believes that banning AI is not a viable solution, as restrictions could encourage the unregulated use of AI tools within the newsroom. In this context, it has opted to establish clear guidelines for its implementation.


At the same time, it maintains a critical stance toward generative AI models, as evidenced by its lawsuit against OpenAI, but seeks to harness their potential within well-defined boundaries.


Permitted uses of AI

  • Generation of SEO-optimized headlines

  • Article summarization

  • Brainstorming and editing support

  • Research


Prohibited uses of AI

  • Significant writing or rewriting of articles

  • Uploading copyrighted material

  • Circumventing paywalls

  • Using generative AI to create images or videos, except in reports about the technology itself


Tools

The NYT has defined a set of authorized tools for its team, including Google Vertex AI, GitHub Copilot, and a restricted version of the OpenAI API (only with legal approval). Additionally, it has developed its own summarization tool, called Echo, to assist with certain tasks without compromising editorial control.

In May 2024, it published its principles for the use of generative AI. These are broad points, and what stood out to me was the explicit reference to the usefulness of journalistic principles for governing AI. The exercise we did for the CIC report was precisely to analyze AI tools and applications and confront them with the core ethical values of journalism.


The ten recommendations for the ethical use of AI emerged from that exercise. The three principles that will govern the use of generative AI at The New York Times are:


  1. As a tool serving its mission: Generative AI can enhance their journalistic capabilities, helping them uncover the truth and making The Times more accessible. While not a magic solution, it is a powerful tool.

  2. Human guidance and supervision: The experience and judgment of their journalists are essential. Generative AI can assist, but the work will always be managed and supervised by journalists, ensuring accountability and accuracy.

  3. In a transparent and ethical manner: Journalistic principles apply equally to AI. They will inform readers about the use of AI and how they mitigate risks such as bias and inaccuracy, maintaining their high standards and journalistic ethics.



Reporters Without Borders (2023)

RSF (Reporters Without Borders) presented what it has called the 'Paris Charter' on AI and journalism, which is summarized in the following points:

  1. Ethics must govern technological decisions within the media.

  2. Human judgment must remain central in editorial decisions.

  3. Media outlets must help society confidently discern between authentic and synthetic content.

  4. Media must engage in global AI governance and defend the viability of journalism when negotiating with tech companies.


Patterns we can extract from the analysis:


After examining these guides and recommendations, some patterns emerge regarding the principles that predominate when media outlets adopt generative AI. The ethical values of truthfulness, responsibility, and transparency are probably the most significant in articulating their recommendations. As a general principle, the use of AI is conditional on whether it aligns with the outlet's mission, which entails acting in the best interests of the public and society. Human responsibility for the informational product is one of the most frequently stated directives. The ethics and values of traditional journalism remain fundamental. Below are some key points:


Transparency and human oversight:

  • Most recommendations highlight the importance of transparency in AI usage, specifying clearly when AI is used in content production.

  • Emphasis is placed on human oversight as an essential requirement to ensure rigor and avoid other risks.

Responsibility and verification:

  • Human responsibility is a recurring theme, with an emphasis that AI tools cannot take responsibility in matters of authorship.

  • Human verification is seen as a crucial step in the editorial process.

Limitations on the use of generative AI:

  • There are precautions about using generative AI to produce text or images, with varying approaches: from not using it to generate or edit text (Wired) to limiting its use and clearly specifying when it has been employed (USA Today). Intermediate positions (BBC, El País) seek a balance between respect for creators and some freedom to use AI when it benefits the final product.

Copyright and other legal matters:

  • The importance of respecting copyright is emphasized, and AI should not be used to generate content that may infringe on rights.

  • Regarding generated images, guidelines are established for their use, clear labeling, and limitations to avoid confusion with real events.

  • Most guidelines insist that the use of AI must comply with applicable laws and follow ethical principles such as human autonomy, fairness, and democratic values.

Commitment to diversity and inclusion:

  • Although mentioned less often, several guidelines note the importance of ensuring that AI-generated content reflects the diversity of the audience and avoids discriminatory biases.


These patterns reflect a broad consensus that the principles and values of journalism provide an appropriate guide for assimilating AI tools into newsroom processes. The ethical principles guiding the use of AI must be based on the ethical principles and values of journalism, and must reinforce them. In this regard, the recommendations from the CIC report, based on journalistic ethical values (truthfulness, responsibility, fairness, and freedom), serve as a fitting guide for the adoption of AI in media outlets.



[Image: Algorithms in the Newsrooms. Ethics, artificial intelligence and journalism]



Table with more ethical guidelines


[This post has been automatically translated from Catalan to English]
