Is AI Taking Over Journalism? An Analytical Approach

August 14, 2024

Khaled Walid & Saja Mortada


At the end of 2022, when ARIJ started working on its Artificial Intelligence Strategy for Small & Medium size Newsrooms/Media Institutions, the debate over the threats that this massive technological transformation poses to journalism had not yet heated up. The discussion focused on the opportunities, which are many, and it is important to remain focused on them, but without ignoring the threats, especially the ethical ones, that could turn AI’s advantages for journalism from a blessing into a curse.


In its strategy, ARIJ points out that “AI in journalism continues to develop, and what is more important than arguing about its negative effects is knowing how to benefit from it for journalistic work. AI may soon reshape journalism, which will raise many questions about its purpose and meaning, and about what media institutions need to focus on and what they need to dispense with”.


As for the ethical dimension, ARIJ focuses in its strategy on several points, including the negative effects that AI may have on journalism, such as privacy violations, bias, and the publication of false and misleading information that could lead to a loss of confidence in the media institution, job losses, and weak accountability. It also anticipates “a change in the role of the journalist: The role of the journalist can change as technology advances. Journalists can become more focused on monitoring and evaluating the performance of intelligent systems and validating the information they produce”.


Will our role as journalists really shift from being the primary sources of news, the producers of investigative reports, the storytellers, to “observers and assessors of the performance of AI tools”? It is an important question, and the debate around it rages on. What is certain is that our role as journalists will not lose its true human value, no matter how deeply technology enters our work. The real danger lies in media institutions and journalists using AI in the wrong way and driving us toward that disastrous outcome.


Our role as journalists should not change with the introduction of AI; rather, it should evolve. By using AI to offload the repetitive tasks that waste time and effort, we can speed up work on our stories and free ourselves to work on the more complex, important, and humane stories that AI cannot deliver.


AI helps us collect, design, and analyze data. It also helps us translate reports, transcribe long interviews, verify photos and videos, and summarize long documents (a minimal sketch of how such routine tasks might be scripted follows the list below). However, AI cannot perform many of a journalist’s most important tasks, including:


  1. Investigative journalism: It requires extensive research, in-depth source analysis, and the ability to uncover hidden facts through interviews and field reporting. AI can help analyze data, but it cannot replace human intuition and judgment in investigative work.
  2. Understanding context: It can be difficult for AI to understand the broader context of any story, including its historical, cultural, social, economic, and religious aspects, which is often essential for accurate and humane reporting.
  3. Interviewing skills: Conducting interviews, especially on sensitive topics, requires empathy, human communication, and a strong intuition for building relationships with sources. AI lacks the emotional intelligence and adaptability to handle such situations effectively.
  4. Ethical decision-making: In journalism, editorial staff and journalists often have to make complicated ethical decisions, such as protecting sources, reporting on sensitive topics responsibly, and ensuring privacy. AI lacks the moral judgment that humans have.
  5. Creativity and narrative: While AI can produce content, it usually lacks the creativity and storytelling skills that journalists use to connect with audiences on a human level.
  6. Adaptability: Journalists often need to quickly adapt to changing situations, assess the credibility of new information, and make immediate decisions on the ground. AI is trained in advance and may have difficulty adapting to real-time changes.
  7. Verifying information and fact-checking: AI can help, through certain tools, with verifying images and videos, but it cannot link data, communicate with sources, or separate correct information from incorrect. Ask ChatGPT about that and you will see for yourself!
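

To make the routine tasks mentioned above concrete, here is a minimal, hypothetical Python sketch of how a newsroom script might transcribe a recorded interview and draft a summary with a large-language-model API. The library, model names, file path, and prompt are illustrative assumptions, not tools prescribed by ARIJ’s strategy, and the output is only a draft.

```python
# Hypothetical sketch: transcribe an interview and draft a summary for human review.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable;
# the model names and the audio file path are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# 1) Transcribe a long recorded interview -- a repetitive task when done by hand.
with open("interview.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2) Ask a language model for a draft summary; a journalist still has to verify it.
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the interview transcript in five bullet points. "
                "Do not add any facts that are not in the transcript."
            ),
        },
        {"role": "user", "content": transcript.text},
    ],
)

print(summary.choices[0].message.content)  # a draft only: fact-check before publishing
```

Even in a workflow like this, the judgment calls listed above, such as context, ethics, and verification, remain with the journalist.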


AI will not change our role as journalists so much as it will change the way we work. But will some jobs in media institutions be lost to AI? The answer depends on how closely media institutions adhere to ethical principles, as well as on their priorities, their strategy, and how they operate. At the same time, AI will demand new skills within media institutions, such as data literacy, technological culture, and the use of new tools. It will also create job opportunities in fields such as programming and data science, and it will push journalists to collaborate with technologists, improving our chances of confronting ethical challenges.


Using AI will not turn us into mere observers; our journalistic work will become more important, and our oversight role will take precedence over a machine that cannot work without our guidance and review. But what about the other ethical challenges? What about AI that contributes to generating deepfakes, disinformation, and bias?


All of this is to be expected, and we experience it constantly today. How many times have you asked ChatGPT a question and received inaccurate answers? A lot! How many times have you noticed AI-generated images spreading on social media? Probably every day. And how many times have you seen media institutions apologize for mistakes they made while using AI?


As media entities in the Arab world, we must recognize that we are still in the experimentation phase. We may make mistakes, but we must also take measures that help us reduce those mistakes and learn from them, the most important of which are pre-publication fact-checking and a commitment to transparency.


By committing to pre-publication fact-checking, media institutions can detect AI-generated content, images, or videos before they reach the public, reducing the damage they can cause. Pre-publication fact-checking also protects us from the ethical and legal consequences of unintentionally publishing fabricated or misleading information generated by AI.


As for committing to transparency: by sharing our AI strategy and the data used to train any AI model, and by acknowledging and correcting any errors we make, we exercise the basic ethical care that helps minimize serious ethical damage to our work. In its AI strategy, ARIJ advises all media institutions to publish the “methodology” and “data” behind any tool, system, or story based on AI. This will not only enhance transparency but also increase confidence in the institution.


AI may threaten journalism, but it need not do so if we follow specific strategies that leave room for reflection, analysis, study, discussion, and development within the journalistic community over the long term. Some of these strategies are the following:


  1. The first step toward using AI correctly is understanding it correctly. Without awareness of AI within media institutions, we can never guarantee its proper use by journalists and institutions. Continuous training and learning are also a necessity, because this is a rapidly evolving field, and keeping up with it must be quick.
  2. Entering the world of AI should not be just about following the “trend”. Our goal should be to understand how AI will help us in our work, and to use it only for that purpose. Every media institution must therefore determine its priorities and its reasons for using AI, and use it accordingly.
  3. Collaboration between media institutions and technology companies on the one hand, and between journalists and technologists on the other. Keeping the discussion between journalists and technologists open is essential to ensure that we, as journalists, remain the owners of the ideas, the decision makers, and the real reviewers. Technologists cannot work alone on tools meant to benefit journalists; journalists must be a fundamental part of this collaborative work from the beginning.
  4. Committing to journalism ethics and standards more than ever: We are in a sensitive phase in which the journalistic community’s understanding of AI is still weak. AI is being used incorrectly, and some are exploiting it to push particular agendas or using it without informing their audiences. If ethical standards are not our priority, we will fail the test of credibility and public trust. “Transparency is the key,” AP’s Garance Burke emphasized during the Global Investigative Journalism Conference in September 2023. “We must educate the public more about our research, our reporting, our methodologies, and the tools we use, so they feel like they have some ownership in understanding what we uncover in our investigations”.
  5. Encouraging investigative journalists to work on investigations that expose the wrongful and exploitative use of AI, and encouraging fact-checkers to focus on correcting any information, image, or video that may have been generated with AI and on educating the public about its dangers. In this context, Burke notes that many journalists may hesitate to start working on AI-focused investigations, but the same curiosity that underpins any investigation is also the starting point for understanding the world of algorithms.


AI is not a competitor to journalism. On the contrary, used appropriately, AI can be a partner in enhancing journalism’s role and in creating greater impact in the world. While it is important to use AI, what matters most is using it for a specific purpose, wisely, and for the benefit of our journalistic work.