
AP guidelines set the stage for the integration of AI

What To Know

  • The AP has released guidelines for generative AI in its newsroom, instructing journalists to treat AI output as "unverified source material" and not to use it to create publishable content.
  • Amanda Barrett, the AP's Vice President for Standards and Inclusion, writes that "We do not see AI as a replacement for journalists in any way."
  • The rules also bar using generative AI to add or remove elements from photos, video, or audio, and prohibit entering confidential or sensitive information into AI tools.
  • The concern is that less scrupulous outlets, in the quest for a competitive edge, might read the AP's carefully controlled approach as a green light to use generative AI more extensively, or even deceptively.

The Associated Press (AP) released guidelines today on the use of generative AI in its newsroom. The organization, which has a licensing agreement with OpenAI, the maker of ChatGPT, has laid out a set of cautious and sensible measures for dealing with the emerging technology, emphasizing that staff should not use AI to create content for publication.

While these guidelines are not particularly contentious, there is a concern that less scrupulous media outlets might interpret the Associated Press’s approach as a signal to use generative AI more extensively or even deceptively.

The AP's guidance on AI underscores a key belief: artificial intelligence-generated content should be treated as an imperfect tool rather than a substitute for skilled writers, editors, and reporters exercising their own judgment.

Amanda Barrett, the AP’s Vice President for Standards and Inclusion, wrote in an article about their AI strategy that “We do not see AI as a replacement for journalists in any way. AP journalists are accountable for the accuracy and fairness of the information we share.”

The article advises AP journalists to treat AI-generated content as "unverified source material," subject to their editorial judgment and AP's sourcing standards before it can be considered for publication.

It acknowledges that employees can “experiment with ChatGPT with caution,” but should refrain from generating content fit for publication. This restriction extends to images as well.

Additionally, the AP does not allow the use of generative AI to add or remove elements from photos, videos, or audio, as per its standards. An exception is made for stories where AI-created art is the focus, but such instances must be clearly labeled.

Barrett also highlights the risk of AI contributing to misinformation and advises AP journalists to exercise caution and skepticism in verifying AI-generated content. To maintain privacy, the guidelines prohibit writers from entering confidential or sensitive information into AI tools.

While these rules are generally reasonable and uncontroversial, other media outlets have been less discerning. Earlier this year, CNET published AI-generated financial explainer articles that contained errors without proper labeling.

Gizmodo also faced criticism when it published a Star Wars article with inaccuracies. The concern is that some outlets, in the quest for a competitive edge, might interpret the AP’s carefully controlled AI use as a green light to prioritize robot-generated journalism in their newsrooms. This could result in the publication of poorly edited or inaccurate content without proper labeling of AI-generated work.
