The brave new world of AI in journalism

Jul 22, 2023, 4:04 PM UTC
3 mins read
(Photo by Adeolu Eletu on Unsplash)

The news industry has been abuzz with discussions about integrating large language models (LLMs) into journalism. Technologists and media executives are exploring ways to use AI to reshape news production. Despite some early setbacks, the technology shows promise for transforming how news stories are produced, though significant challenges remain.

The New York Times recently revealed that Google has been approaching major media organizations like The Washington Post and News Corp with a compelling proposition. The tech giant aims to develop an AI-based tool that would assist journalists in generating news stories. According to Google spokesperson Meghann Farnsworth, the initial concept revolves around AI-enabled tools that could offer journalists options for headlines or various writing styles. The goal is to empower journalists to enhance their work and productivity, much like Google’s assistive tools in Gmail and Google Docs.

OpenAI, one of the pioneers in AI development, is making significant strides in the journalism domain. The startup recently struck a $5 million deal with the American Journalism Project to provide local news outlets with wider access to its GPT-4-based API. Additionally, The Associated Press forged a similar partnership with OpenAI, giving the newswire access to the company's cutting-edge AI tools in exchange for licensing some of its archives to help train OpenAI's models.

Notwithstanding the ambitious visions surrounding AI integration, there have been some embarrassing blunders that warrant scrutiny. Tech site CNET's experiments with AI-produced journalism yielded a series of articles brimming with inaccuracies, as reported by Futurism. Similarly, G/O Media, the publisher behind popular outlets like Gizmodo and Jezebel, faced severe backlash after publishing AI-generated stories riddled with factual errors. The lack of transparency and accountability in AI-generated content has raised concerns among journalists and readers alike.

It is crucial to acknowledge that automation has played a role in journalism for nearly a decade. Publishers have employed automation to streamline repetitive tasks such as generating sports recaps and earnings reports. However, LLMs represent a distinct leap in technological capabilities. Unlike previous forms of automation, LLMs can produce content from scratch, leaving little insight into the rationale behind word choices and offering limited accountability for accuracy and originality.

Despite the early hiccups, media and tech executives remain committed to exploring AI’s potential in newsrooms. The allure of increased efficiency and expanded content creation possibilities drives this pursuit. However, the path to seamless AI integration poses significant challenges that demand attention.

One primary concern with AI-generated content is the lack of transparency. Journalists and readers often struggle to discern whether a human or an AI authored a particular article. The opacity of an AI model's decision-making process raises questions about bias and credibility, ultimately eroding readers' trust in the news.

The art of storytelling lies in human creativity and authentic expression. While AI can churn out words at a remarkable pace, it may struggle to replicate the nuance and emotional depth that human writers can convey. Maintaining the integrity of journalism amidst the introduction of AI calls for a careful balance between human and AI collaboration.

As AI increasingly aids in content creation, the role of journalists may undergo a transformation. Journalists may need to pivot from content generation to curating, analyzing, and interpreting AI-produced news stories. This evolution demands a reevaluation of journalistic practices and ethical considerations.
