The 7 challenges facing the media in the face of artificial intelligence

Artificial intelligence (AI) promises a profound transformation of the media industry and, more broadly, of the way we produce, consume and value information. From initial awe to the exploration of new editorial frontiers, here is a look at seven challenges involved in reinventing journalism and the media in the age of generative AI.

Having barely digested the upheavals brought about by the Internet era and the expansion of social platforms, the media are now facing a major new challenge: artificial intelligence. It’s a groundswell of change even more dazzling, radical and disruptive than the digital revolution that has been shaking the industry for the past twenty years.

In just under a year, the explosion of generative artificial intelligence has shaken up every sector, including the media. First as a news story, then as a technology capable of profoundly transforming the news factory from top to bottom, from information gathering to distribution.

Confronted with an unprecedented drop in public confidence, audiences turning away from news, monetization difficulties and the precariousness of the profession, journalists oscillate between the intuition of unprecedented opportunities and the fear of outright replacement.

However, if we take a closer look at how AI works and what it is capable of today, the emergence of an army of robot journalists remains the stuff of science fiction. On the other hand, the evidence that artificial intelligence represents a tremendous opportunity to reinvent the news business is far more real and concrete.

To put AI to work for the media and write a new chapter in the history of journalism, we need to understand how it works, what is at stake and where the dangers lie. Here is an overview of the challenges artificial intelligence poses to the media.

1) Get over the awe and demystify AI

Since ChatGPT went online in November 2022, a steady stream of new AI-powered features has flooded the media, creating a worldwide sense of awe.

With a front-row view of this revolution, the media are trying to keep pace, chronicling the dazzling and unprecedented prowess of language models: computer programs capable of interacting in natural language and creating strikingly lifelike textual and audiovisual content. With such spectacular and unprecedented performance, clearly stepping on the media's toes, ChatGPT has opened up the prospect of a partial or total replacement of the journalistic profession.

The reasoning is as follows: if AIs are capable of collecting, sorting and formatting information, as well as personalizing responses and style for each reader, with astonishing results, then they represent a direct threat to the sector.

This threat seems all the more credible given that generative AIs are designed precisely to imitate and surpass human capabilities in language, reasoning, creativity, planning and decision-making. All this, with promises of exponential gains in efficiency and productivity.

Yet, despite this desire to surpass human abilities, the intrinsic functioning and skills of generative AIs do not make them potential journalists, or even relevant sources.

    However powerful they may be, ChatGPT and other conversational agents have no experience of the real world. They are incapable of distinguishing truth from falsehood or reality from fiction, still less of exercising critical or moral judgment.

    They are “black boxes”, whose stages of reasoning cannot be traced, and whose exact sources cannot be identified.

    Furthermore, chatbots cannot be considered reliable search engines, as they are prone to “hallucinations” and to biases inherent in the statistical, probability-based models they run on (a toy sketch below illustrates this). It should also be remembered that the data ingested by generative AIs is meant to train large language models; its primary purpose is not to serve as a factual database for users to consult.

    Finally, AIs are limited to the production of plausible, highly convincing content, but with no guarantee of accuracy.

For all these reasons, and many more, AIs cannot be considered journalists or credible sources. At best, they can play the role of trainee assistants capable of structuring data to explore a subject, while remaining limited to a purely theoretical, formal and statistical knowledge of a topic.
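
To make the probability point concrete, here is a toy sketch in Python. The mini-vocabulary and probabilities are invented for illustration and bear no resemblance to a real model, but the principle holds: a language model picks each next word according to learned probabilities, optimizing for plausibility rather than truth, which is why it answers just as fluently about a fictional place as about a real one.

    import random

    # Invented next-word probabilities for completing "The capital of X is..."
    next_word = {
        "France": [("Paris", 0.90), ("Lyon", 0.07), ("Marseille", 0.03)],
        "Atlantis": [("Poseidonis", 0.5), ("Atlantica", 0.3), ("Mu", 0.2)],
    }

    def complete(place):
        # Sample the next word in proportion to its probability
        words, weights = zip(*next_word[place])
        return random.choices(words, weights=weights)[0]

    # The model is equally fluent whether or not a true answer exists:
    print("The capital of France is", complete("France"))
    print("The capital of Atlantis is", complete("Atlantis"))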

So we need to get over our astonishment and dispel the fantasy of robot journalism. It is not the job itself that is being called into question, but specific tasks with low added value and high organizational cost: rewriting agency wire copy, editing articles, converting texts into audiovisual content, automatically producing newsletters, etc. (see point 3).

2) Set standards for AI use and train teams

To guarantee the responsible and profitable use of artificial intelligence within the media, a charter is certainly the most effective tool. It provides a framework for practices and promotes enlightened, reasoned use.

The drafting of a charter is also a mobilizing exercise, offering each media outlet the opportunity to consolidate its editorial DNA, values and missions. At the heart of this reflection are journalistic ethics and questions linked to the relationship with audiences, through the issues of trust and credibility.

Here’s a non-exhaustive overview of the ingredients present in the AI charters drawn up by the media to date:

    systematic human supervision of content processed by AI;

    transparency of practices, both within editorial departments and vis-à-vis audiences;

    a list of authorized and prohibited uses;

    which links in the news production chain are covered;

    accountability and responsibility for content;

    respect for privacy and copyright;

    strategic objectives;

    bias management and the choice of tools;

    etc.

There are two limits to this exercise. The first is that it means committing to principles and rules of conduct when it is impossible to predict what the technology will make possible in the short and medium term. These charters will therefore have to evolve.

The second concerns the guarantee of systematic human supervision for all content produced or edited by an AI. This position closes the door on the creation of autonomous chatbots, as well as the automated generation and distribution of texts and audiovisual productions.

Until now, the automation of content production has mainly relied on systems whose procedures (machine learning) and databases could be controlled. This is how many media outlets have automated the production of specialized content: sports, stock market, election, weather and other results. With generative AI and its deep-learning processes, such control and transparency are no longer possible, which raises the question of editorial responsibility.

As well as drawing up a charter, ongoing team training is an essential step in integrating artificial intelligence into editorial departments. The quality of coverage of AI as a journalistic subject also depends on training. Like climate change, AI is a cross-disciplinary theme that requires specific knowledge and tools.

3) Identify opportunities for productivity gains, without taboos

Of all the opportunities offered by AI, the automation of repetitive and time-consuming tasks is among the most promising. The lower cost of access to the technology means that we can take a fresh look at problems inherited from the industrial age that have until now remained unresolved.

All stages of the information production process are concerned, from collection to editing, formatting and distribution. The press in particular is weighed down by industrial processes studded with manual, low-value-added tasks, which reduce the time and resources available for the fundamental work of journalism: finding, verifying, prioritizing and telling the story.

The productivity gains promised by AI could be summed up as follows: less rote rewriting of wire copy, more investigation, more reporting and new services for readers.

For over 10 years, editorial departments have been using automation to produce articles based on structured data about sports, the stock market or the weather (a minimal sketch of this approach follows the list below). Examples that foreshadow others:

    Editing: content classification (tagging), SEO optimization, reformatting and summarizing.

    Processing: text transcription of audio interviews, translation, document synthesis, research in vast databases, data visualization, combating misinformation, etc.

    Distribution and broadcasting: content recommendation, automatic and personalized newsletters.

    New services: production of multimedia content and multilingual adaptation of an editorial offering.
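
As promised above, here is a deliberately simple sketch of template-based article generation from structured data, the long-standing approach behind automated sports, stock-market and weather briefs. The match data, field names and wording rules are invented for the example; real newsroom systems are far richer, but the principle is the same.

    # Hypothetical structured data for one match (invented for illustration).
    match = {
        "home": "Lyon", "away": "Marseille",
        "home_goals": 3, "away_goals": 1,
        "scorers": ["Lacazette 12'", "Cherki 45'", "Lacazette 78'"],
    }

    def recap(m):
        # Pick a headline template according to the result
        if m["home_goals"] > m["away_goals"]:
            headline = f"{m['home']} beat {m['away']} {m['home_goals']}-{m['away_goals']}."
        elif m["home_goals"] < m["away_goals"]:
            headline = f"{m['away']} won {m['away_goals']}-{m['home_goals']} away at {m['home']}."
        else:
            headline = f"{m['home']} and {m['away']} drew {m['home_goals']}-{m['away_goals']}."
        # Append a fixed-format body built from the structured fields
        return headline + " Scorers: " + ", ".join(m["scorers"]) + "."

    print(recap(match))

Unlike a generative model, every sentence such a system can produce is known in advance, which is what made editorial control of this first wave of automation possible.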

These are targeted, more nuanced applications, unlike the failed experiments of CNET in the USA or Bild in Germany, which aimed to replace journalists with machines.

Used to support journalism, AI offers opportunities to go beyond current industrial limits. A dynamic that can help the media to improve production processes, invent new services for readers and generate new sources of revenue.

4) Prepare for the emergence of AI assistants

The adoption of AI by the general public owes much to this new generation of chatbots. The ergonomics and performance of these conversational agents have outclassed the traditional search experience offered by search engines. Why wade through pages of links and slalom between advertisements when you can simply ask your personal assistant a question and instantly obtain a clear, circumscribed answer, with the option of refining it? This is the promise of an “answer engine” rather than a search engine: an answer, rather than a list of links and new searches to undertake.

The other major advantage of AI-powered assistants is the ability to personalize exchanges and responses according to the context of each user. These impressive feats are being further enhanced by deep profiling. This technique combines access to our personal data and public resources, advanced statistical processing, memory capacity and the magic of automatic, continuous and autonomous learning.

“Understanding you better to serve you better” could be the slogan of these new personal assistants. A logic that has begun to take root at the very heart of consumer browsers and software suites, such as Microsoft’s Copilot.

However practical and effective, the hyper-personalization of access to information entails a number of risks for democratic debate. The digital public arena, already largely shaped by the algorithms of the major platforms, risks losing even more ground. The mediation of these opaque conversational agents contributes, in effect, to reducing the surface area of the digital public space made up of information accessible and shared by the greatest number.

This risk is amplified by the lack of transparency of sources and the hallucination phenomenon inherent in artificial intelligence.

Not to mention the known side-effects of algorithmic recommendation: amplification of filter bubbles, bias of all kinds, reinforcement of existing opinions or reduced chances of being confronted with contradiction.

5) Anticipate the decline of the link economy

The revolution in AI-powered personal assistants is paving the way for a new paradigm in how information is shared and retrieved online. These assistants are shaping a world in which the relevance of search engines as we know them is called into question, and in which the ecosystem that thrives on the web's link-based dynamism and architecture is threatened.

Facebook had already breached the founding principles of the web by proposing a closed digital universe, a privatized social web.

For its part, Google short-circuited the mesh of links in 2012 by introducing the Knowledge Graph and its snippets, those blocks on the search results page that synthesize answers extracted from websites. This highlighting of content extracts led to a drop in traffic for the sites concerned, as in the case of Wikipedia, which saw visits to its pages fall drastically.

Twenty years after Facebook, it is the American company perplexity.ai that is shaking up the web model by challenging Google on its own turf: search. How? By offering a hybrid, paid product for accessing information: part chatbot, part search engine, a synthesis of the two worlds. Microsoft and OpenAI are exploring the same avenue with Bing. On one side, access to the knowledge available online; on the other, the convenience of a conversational agent capable of identifying sources, sorting and synthesizing them, and providing multimodal responses (text, sound, video and code).

In so doing, perplexity.ai is reshuffling the cards of online research by defining new rules for accessing information, with incalculable consequences for the functioning and economy of the web.

From the media’s point of view, this great upheaval implies new intermediaries who monopolize the value of journalistic productions, digest information and impose new rules without any financial compensation or guarantee of visibility. Indeed, if chatbots replace search engines to provide turnkey answers, the logic of redirecting Internet users to news sites will disappear or fade.

This dynamic opens up the prospect of a loss of exposure for content, leading to a drop in traffic and consequent loss of revenue, from both the advertising and reader markets.

In short, if it is no longer necessary to visit a site to obtain information, how can we finance media that are deprived of web traffic?

A challenge to the link economy as it has existed since the creation of the web. An economy built on the free circulation of information, interaction, collaboration and the creation of shared value within the World Wide Web ecosystem.

While the ashes of the neighboring rights battle with Google are still smoldering, publishers saw their content downgraded by Facebook last year, depriving them of visibility within the world’s largest social platform.

A wake-up call that is forcing publishers to reinvent their business model, at the crossroads of the attention economy, the link economy and the content economy. Publishers are torn between the desire to benefit from the opportunities offered by AI platforms, the development of paywalls and the prospect of seeing their content plundered by these new infomediaries, without compensation. For the time being, the attitude of the media industry is largely to block, as best they can, the indexing of their content, while waiting for negotiations to take place or for a legal framework to be imposed on the various players.

Faced with these new realities, the media have no choice but to strengthen ties with their readers by multiplying points of contact, developing tailor-made services, certifying information and developing spaces conducive to public debate.

6) Avoid the pitfalls of technological dependency

While dependence on social networks and the battle for visibility of journalistic content in search engines are still major issues, the media are starting a new tug-of-war with AI giants over the exploitation of their content.

For the time being, two approaches can be distinguished. The first consists of blocking access to archives and demanding remuneration for the exploitation of content. This is the position defended by the majority of publishers, such as the New York Times, which has also filed a lawsuit against the creator of ChatGPT. The American newspaper accuses OpenAI of copyright infringement and illegal exploitation of its content to feed its artificial intelligence.

The second approach consists of forging partnerships with providers of artificial intelligence solutions, first and foremost OpenAI, the creator of ChatGPT. This is the case for Le Monde (the first French media company to sign such an agreement), the Associated Press and the Axel Springer group, which have all negotiated with OpenAI in return for benefits such as having their content featured in the answers provided by ChatGPT, exclusive access to AI tools for editorial staff and direct collaboration with technical teams.

As for Google, its teams are working on the Genesis project, which aims “to eventually provide AI-enabled tools to help journalists in their work”. Microsoft has chosen the Semafor site to develop search tools for editors, but also to create summary formats for readers.

By opting for one AI tool or another, the media expose themselves to many and varied risks: dependence, the security and confidentiality of data processing, vagueness about the legality of training methods and sources, a lack of transparency about how the tools work, fluctuating costs, and so on. It is therefore crucial to ensure that the technical solutions adopted are flexible enough to allow the underlying language models to be swapped easily, should the need arise.

We also need to bear in mind that artificial intelligence and language models are technical and legal black boxes, at the heart of a major battle for technological sovereignty on a global scale. The ability of publishers to label their content, in order to trace its use and exploitation by language models, is an essential step in enabling the media industry to assert its rights. This will require close cooperation between all players in the sector.

7) Arm ourselves against the danger of generalized information disorder

Thanks to the growing accessibility of AI-powered tools, a flood of automated, synthetic, industrially produced media content threatens to overwhelm the web.

Like the destabilization and disinformation operations seen during election campaigns, the flooding techniques already at work threaten to grow in scale and destabilize democracies. In 2024, half of humanity is due to go to the polls, and the first examples of deepfakes and manipulation are already invading the web and social networks.

By offering the possibility of personalizing content on a large scale, as well as infinitely varying formats (sites, publications on social networks, podcasts, videos, etc.), artificial intelligence further enhances the effectiveness and impact of synthetic content.

Whether the content produced is malicious or not, this phenomenon feeds informational disorder and outlines a world in which it is no longer possible to distinguish with certainty what is true from what is false, or whether content has been generated by a machine or by a human.

If the volume of such content were to exceed certain critical thresholds, it would massively enter the learning loop of artificial intelligence systems and de facto contaminate the databases required for their development. The result of this dystopian scenario would be an informational universe dominated by standardized knowledge; factually false, misleading or low-quality information; and the endless reproduction of the same biases. A world plagued by the poison of doubt, approximation, confusion and systematic suspicion.

In such a scenario, the media will be among the few bastions capable of repelling the assaults of the false, the fake and the plausible. More than ever, they will have to contribute to the diversity of sources and opinions, as well as certifying the news by signing their content with the words: “Written and verified by [this journalist], for [such media]”.

This article was first published by the European Journalism Observatory newsletter.

This article is published under a Creative Commons license (CC BY-ND 4.0). It may be republished provided the original location (en.ejo.ch) and authors are clearly credited, but the content may not be modified.