I drafted this a few days before the end of 2023, and since I spent some time on it, I wanted to publish it here. So here it is. Don't forget to subscribe to my site for more, and if you like this piece, please share it or let me know what you think at the end of this post.
The New York Times lawsuit filed against OpenAI and Microsoft in the last week of December contains deeply troubling allegations of copyright infringement. More importantly, it marks the beginning of a journalism landscape in which those who produce original, credible work could materially benefit, creating new revenue streams for an industry in dire need of a major transformation.
It is hard to argue against the 69-page complaint, which claims that millions of Times stories were used to train generative AI chatbots without authorization. That training, the suit alleges, created systems that now compete directly with the Times. In other words, OpenAI and Microsoft are making money by stealing original stories without any attribution, a mortal sin in journalism.
The hope was that both parties would reach a licensing agreement before a lawsuit became necessary. But while OpenAI made deals this year with the Associated Press and the German publisher Axel Springer, the Times was not ready to capitulate, and rightly so.
“Through Microsoft’s Bing Chat (recently rebranded as ‘Copilot’) and OpenAI’s ChatGPT,” the lawsuit notes, “Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.”
The stakes are high. Last year, a report from Bloomberg Intelligence estimated that “the generative AI market is poised to explode, growing to $1.3 trillion over the next 10 years from a market size of just $40 billion in 2022.” Meanwhile, the news business is reeling, with the most job cuts in three years, as reported this week by Poynter. (As a former president of an award-winning nonprofit media company, I can attest that 2023 was the most challenging year of my 15 years in digital journalism.) The new year hasn't started well for digital journalism either, with news this week that the startup The Messenger is laying off staff due to cash flow problems.
The lawsuit comes at a critical time for an industry that has struggled ever since the internet went mainstream. The Times is the first American news organization ready to take on this fight, and by framing the issue as copyright theft, it places the burden on OpenAI and Microsoft to prove that they weren't stealing original works in the name of providing vetted and trustworthy information.
It’s no surprise, then, that part of OpenAI’s initial statement about the lawsuit signaled an openness to negotiation rather than more litigation, suggesting that the Times has just begun a series of negotiations that could create substantial funding sources not only for the Times but for any media companies or journalists who follow a similar strategy.
“We respect the rights of content creators and owners and are committed to working with them to ensure they benefit from A.I. technology and new revenue models,” OpenAI spokesperson Lindsey Held told the Times. “We’re hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers.”
It’s not as if journalists are against generative AI. Close to 75 percent of news organization respondents believe these new technologies “present new opportunities for journalism,” and 85 percent have used generative AI to some degree “to help with tasks such as writing code, image generation and authoring summaries,” according to a 2023 global survey from the London School of Economics and Political Science. Still, 60 percent of respondents in the same survey “noted their concern about the ethical implications of AI on journalistic values including accuracy, fairness and transparency.”
For that reason, journalists must call out examples of AI gone wrong, as when the outlet Futurism exposed Sports Illustrated for publishing "stories" by fake, AI-generated writers. Several executives at SI’s publishing company were eventually let go, and while the scandal was not cited as the specific reason, the episode provided some sense of vindication.
Then there is the case of Channel 1, a news network scheduled to premiere in 2024 with AI-generated anchors. Channel 1 promises to use trusted news sources for its AI anchors to report on, but the concept still feels a bit too creepy and untrustworthy.
That is where the problem truly lies. No one knows where this will go, because ethics are always the last consideration when new technologies grow at such a blistering pace. The Times lawsuit puts some necessary brakes on what appears to be an eventual reality: generative AI will be a part of journalism, but how much, and who gets paid for what, remains open to vigorous debate. What cannot happen is to let AI advocates who are not journalists dictate that debate.
As the Times noted in its story about the lawsuit, the venture capital firm Andreessen Horowitz has already dismissed such concerns to the U.S. Copyright Office as bad for business, saying in a statement that any requirement to honor copyrighted works would “either kill or significantly hamper their development.”
There is a lot of money on the table. But with the right ethical push, and with a lawsuit that has amplified why it is important to acknowledge credible journalism, just as with book publishing and other forms of original writing, the future can be prosperous for all involved.
“Independent journalism is vital to our democracy. It is also increasingly rare and valuable,” the beginning of the Times complaint says.
It’s time to ensure that it doesn’t suddenly vanish.