Throughout its more than 130-year history, the Financial Times has upheld the highest standards of journalism. As editor of this newspaper, nothing is more important to me than readers’ trust in the quality journalism we produce. Above all, quality means accuracy. It also means fairness and transparency.
That’s why today I’m sharing my current thoughts on the use of generative artificial intelligence in the newsroom.
Generative AI is the most significant new technology since the advent of the internet. It is evolving at breakneck speed and its applications and impact are still nascent. Generative AI models learn from vast amounts of published data, including books, publications, Wikipedia, and social media pages, to predict the most likely next word in a sentence.
This innovation is an increasingly important area of coverage for us, and I am fully committed to making the FT an invaluable source of information and analysis on AI for years to come. But the technology also has obvious and potentially far-reaching implications for the day-to-day work of journalists and editors. It could help us discover and analyze stories, and it has the potential to increase productivity and free up reporters’ and editors’ time to focus on generating and reporting original content.
Although their output can seem fluent and plausible, the AI models available on the market today are ultimately prediction engines that learn from the past. They can invent facts – so-called “hallucinations” – and fabricate references and links. With sufficient manipulation, they can produce entirely false images and articles. They also reproduce existing societal views, including historical biases.
I believe our mission to produce journalism of the highest caliber is even more important in this age of rapid technological innovation. At a time when misinformation can be quickly generated and spread, and trust in the media in general has declined, we at the FT have a greater responsibility to be transparent, report the facts and pursue the truth. In the new AI age, therefore, FT journalism will continue to be reported and written by people who are the best in their fields and committed to reporting and analyzing the world as it is, accurately and fairly.
The FT is also a pioneer in digital journalism, and our business colleagues will leverage AI to deliver services to readers and customers, maintaining our track record of effective innovation. Our newsroom must likewise remain a hub for innovation. It is important and necessary for the FT to have a newsroom team that can responsibly experiment with AI tools to assist journalists in tasks such as data mining, text and image analysis, and translation. We will not publish AI-generated photorealistic images, but we will explore the use of AI-enhanced visuals (infographics, charts, photos) and make it clear to the reader when we do. This does not apply to illustrations commissioned from artists for the FT. The team will also explore the summarization capabilities of generative AI, always under human supervision.
We will be transparent, both within the FT and with our readers. All newsroom experiments will be recorded in an internal register, including, where possible, the third-party suppliers of the tools involved. Our journalists will be trained, through a series of masterclasses, in the use of generative AI for story discovery.
Every new technology opens up exciting frontiers that must be explored responsibly. But as recent history has shown, the excitement must be accompanied by caution about the risk of misinformation and the falsification of the truth. The FT will stay true to its fundamental mission and keep readers updated on the evolution of generative AI itself and on our thinking about it.