News organisations push for AI regulation to safeguard public trust in media


A number of the world’s largest media organisations have joined forces to call for greater transparency regarding the training of generative AI models. In an open letter to policymakers published yesterday, they ask to be involved in creating standards for the use of artificial intelligence, particularly as it relates to intellectual property rights.

With generative AI, it is now possible to produce and distribute synthetic content at a previously unimagined pace and scale. The threat, the letter states, is that the irresponsible use of the technology could come to endanger the media ecosystem as a whole by eroding the public’s trust in the independence and quality of content.

The signatories of the letter say that they support the responsible advancement and deployment of generative AI technology. However, they also believe that “a legal framework must be developed to protect the content that powers AI applications as well as maintain public trust in the media that promotes facts and fuels our democracies.”

Guidelines for AI training and disclosure

In the letter, titled “Preserving public trust in media through unified AI regulation and practices,” they also lay out priorities when it comes to regulating the rapidly advancing technology. 

These include transparency as to the makeup of all training sets used to create AI models, consent of intellectual property rights holders for the use of their material, and the collective negotiation between media groups and AI model operators and developers. 

Several media companies and artists have sued AI developers for copyright infringement. For instance, Getty Images filed a case against Stability AI in February, and comedian Sarah Silverman against OpenAI last month.

But there is also a precedent for collaboration. In July, OpenAI and The Associated Press made a deal for the GPT developer to license AP’s archive of news stories. The parties did not reveal the financial details of the deal.

The letter writers also demand that generative AI models and users “clearly, specifically, and consistently identify their outputs and interactions as including AI-generated content,” and take steps to eliminate bias and misinformation from their services. 

Far-reaching implications of unchecked AI deployment

Generative AI has been hailed as the next frontier in productivity, and studies suggest it could add up to $4.4 trillion (€3.99 trillion) of value to the global economy yearly. Meanwhile, concerns about its applications range from fake online reviews to the dissemination of disinformation, mass surveillance and discrimination, job losses, and even the eventual extinction of the human race.

Among the organisations behind the letter is the European Publishers’ Council (EPC), a high-level group of chairpersons and CEOs of leading European media corporations. Since 1991, the group has lobbied on over 250 different EU proposals and directives.

Agence France-Presse, European Pressphoto Agency, Gannett | USA TODAY Network, Getty Images, National Press Photographers Association, National Writers Union, News Media Alliance, The Associated Press, and the Authors Guild also signed the letter.