
Newsroom leaders are bracing for chaos as they consider guardrails to protect their content from AI-driven aggregation and disinformation.

The New York Times and NBC News are among the organizations in preliminary talks with other media companies, major technology platforms and Digital Content Next, the industry's digital news trade organization, to develop rules around how their content can be used by natural-language AI tools, according to people familiar with the matter.

The latest hot trend, generative artificial intelligence, can create seemingly novel blocks of text or images in response to complex queries such as "Write an earnings report in the style of the poet Robert Frost" or "Draw a picture of an iPhone by Vincent van Gogh."

Some of these generative AI programs, such as OpenAI's ChatGPT and Google's Bard, are trained on large amounts of publicly available information from the internet, including journalism and copyrighted art. In some cases, the generated material is pulled almost verbatim from these sources.

Publishers fear these programs could undermine their business models by publishing repurposed content without credit and by creating an explosion of inaccurate or misleading content, decreasing trust in news online.

Digital Content Next, which represents more than 50 of the largest U.S. media organizations, including The Washington Post and Wall Street Journal parent News Corp., this week published seven principles for the "Development and Governance of Generative AI." They address issues of safety, compensation for intellectual property, transparency, accountability and fairness.

The principles are meant to be an avenue for future discussion rather than industry-defining rules. They include: "Publishers are entitled to negotiate for and receive fair compensation for use of their IP" and "Deployers of GAI systems should be held accountable for system outputs." Digital Content Next shared the principles with its board and relevant committees Monday.

News outlets grapple with AI

Digital Content Next's "Principles for Development and Governance of Generative AI":

  1. GAI developers and deployers must respect creators' rights to their content.
  2. Publishers are entitled to negotiate and receive fair compensation for the use of their IP.
  3. Copyright laws protect content creators from unlicensed use of their content.
  4. GAI systems should be transparent to publishers and users.
  5. Deployers of GAI systems should be held accountable for system outputs.
  6. GAI systems should not create or risk creating unfair market or competitive outcomes.
  7. GAI systems should be secure and address privacy risks.

The urgency behind creating a system of rules and standards for generative artificial intelligence is intense, said Jason Kint, CEO of Digital Content Next.

"In my time as CEO, I've never seen something go from an emerging issue to dominating so many workstreams," said Kint, who has led Digital Content Next since 2014. "We've had 15 meetings since February with all types of media."

As generative artificial intelligence develops in the coming months and years, it will dominate the media conversation, said Axios CEO Jim VandeHei.

“Four months ago I wasn’t thinking or talking about AI. Now it’s all we talk about,” VandeHei said. “If you own a company and AI isn’t something you’re obsessed with, you’re crazy.”

Lessons from the past

Generative artificial intelligence presents both potential efficiencies and threats to the news business. The technology can create new content, such as games, travel lists and recipes, that provides consumer benefits and helps cut costs.

But the media industry is equally concerned about the threats AI poses. Digital media companies have seen their business models flounder in recent years as social media and search firms, principally Google and Facebook, reaped the rewards of digital advertising. Vice declared bankruptcy last month, and shares of news site BuzzFeed have traded below $1 for more than 30 days; the company has received a delisting notice from Nasdaq.

Against this backdrop, media leaders such as IAC Chairman Barry Diller and News Corp. CEO Robert Thomson are pushing Big Tech companies to pay for any content used to train AI models.

"I continue to be amazed that so many media companies, some now holed below the waterline, have been reluctant to advocate for their journalism or for reform of an obviously dysfunctional digital advertising market," Thomson said during his opening remarks at the International News Media Association's World Congress of News Media in New York on May 25.

Speaking at the Semafor conference in New York in April, Diller said the news industry needs to band together to demand payment, or threaten a copyright lawsuit, sooner rather than later.

"You have to get the industry to say you cannot scrape our content until you develop systems where the publisher gets some avenue of payment," Diller said. "If you actually take these [AI] systems and you don't connect them to a process where there's some way of getting compensated for it, all will be lost."

Fighting misinformation

Beyond balance-sheet issues, the most important concern news organizations have about AI is alerting users to what's real and what isn't.

"Broadly speaking, we're optimistic about this technology for us, with the big caveat that it poses huge risks for journalism when it comes to verifying the authenticity of content," said Chris Berend, head of digital at NBC News Group, who added he expects AI to work alongside human beings in the newsroom rather than replace them.

There are already signs of AI's potential to spread misinformation. Last month, a verified Twitter account called "Bloomberg Feed" tweeted a fake photograph of an explosion at the Pentagon, near Washington, D.C. While the photo was quickly debunked as fake, it led to a brief dip in stock prices. More advanced fakes could create even greater confusion and cause unnecessary panic. They could also damage brands: "Bloomberg Feed" had nothing to do with the media company Bloomberg LP.

"It's the beginning of what's going to be a hellfire," VandeHei said. "This country is going to see a mass proliferation of mass garbage. Is it real or is it not real? Add this to a society that's already grappling with what's real or not real."

The U.S. government could regulate Big Tech's development of AI, but the pace of regulation will likely lag the speed with which the technology is being used, VandeHei said.


Tech companies and newsrooms are working to combat potentially destructive AI fakes, such as the recent fabricated photo of Pope Francis wearing a large puffer coat. Google said last month it will embed information in images that allows users to determine whether they were created with AI.

Disney's ABC News "already has a team working around the clock to verify the veracity of online video," said Chris Looft, coordinating producer for visual verification at ABC News.

"Even with AI tools, or generative AI models that work in text like ChatGPT, it doesn't change the fact that we're already doing this work," Looft said. "The process remains the same: combining reporting with visual techniques to confirm the veracity of video. That means picking up the phone and talking to eyewitnesses or analyzing metadata."

Ironically, one of the first uses of AI to take over human work in the newsroom may be fighting AI itself. NBC News' Berend predicts an arms race of "AI policing AI" in the coming years, as both media and technology companies invest in software that can properly sort and label the real from the fake.

“The fight against disinformation is a battle of computing power,” Berend said. “One of the main challenges when it comes to verifying content is technological. It’s such a big challenge that it has to be done through partnerships.”

The confluence of rapidly evolving technology, input from dozens of major companies and looming U.S. government regulation has led some media executives to privately acknowledge the coming months could be chaotic. But they hope today's more mature digital age can help reach solutions more quickly than in the earlier days of the internet.

Disclosure: NBCUniversal is the parent company of the NBC News Group, which includes both NBC News and CNBC.

WATCH: We must regulate generative artificial intelligence
