The Los Angeles Times, under the ownership of billionaire Patrick Soon-Shiong, is introducing an AI-powered “bias meter” that labels opinion articles by their perceived political leaning. The move comes amid ongoing turmoil at the newspaper, including layoffs and accusations of editorial interference by Soon-Shiong himself. The new system, developed by Particle.News, categorizes articles as “Left, Center Left, Center, Center Right, or Right.” In addition, an AI tool from Perplexity will offer alternative viewpoints on opinion pieces in a section called “Viewpoints.”
This AI integration extends to any article offering a perspective on an issue, broadening its scope beyond traditional opinion pieces. These perspective-driven articles will be distinguished with a “Voices” label, while standard news reports will remain unaffected by the AI features. Soon-Shiong frames this initiative as a way to enhance trust in media, a claim met with skepticism given his own history of editorial involvement.
While the goal of fostering trust is laudable, Soon-Shiong’s track record raises concerns. Previous incidents, such as altering an opinion piece about Robert F. Kennedy Jr. and interfering with an article about a friend involved in a dog attack, have eroded confidence in his commitment to unbiased reporting. The Los Angeles Times Guild, while open to presenting alternative viewpoints, criticizes the use of AI for this purpose. Its concerns stem from the lack of editorial oversight of AI-generated content and the potential for inaccuracies, exemplified by ChatGPT’s recent Oscars blunder. The Guardian, moreover, reports instances in which the AI tool suggested viewpoints already present within the article itself. The Guild argues that the resources allocated to this AI initiative could have been better spent supporting journalists facing stagnant wages.
The broader implications of widespread AI-generated content are also troubling. The proliferation of AI-authored articles, often containing fabricated information, risks polluting the internet with misinformation. This misinformation, in turn, could become ingrained in future AI models, perpetuating a cycle of inaccuracy. The question remains: how much of this fabricated content will find its way into the Times’ alternative viewpoints section?
Perplexity’s approach to journalism adds another layer of complexity. The company has faced criticism for scraping news articles and republishing their content within its chatbot, invoking fair use as justification. During the New York Times Tech Guild strike, Perplexity’s CEO even offered the company’s AI tools as a potential substitute for striking workers. Given this precedent, some fear Soon-Shiong might expand the use of AI to generate entire articles, further diminishing the role of human journalists.
Other news organizations, including The Washington Post, are also experimenting with AI, albeit in less controversial ways, such as automated article summarization. However, The Post is also facing its own challenges, with owner Jeff Bezos increasingly exerting control and shifting the opinion section towards a pro-capitalist stance, resulting in substantial subscriber losses.
The initial hope that billionaire owners would rescue legacy media from the internet’s disruptive impact has proven naive. The current reality reveals the inherent conflict of interest when billionaires with their own business and political agendas control news outlets. The prioritization of appeasing powerful figures like President Trump, potentially in exchange for business advantages, further underscores this concern.