https://www.linkedin.com/pulse/reflections-risjs-ai-future-news-conference-2025-felix-m-simon-94jue
Felix M. Simon
Research Fellow in AI and News, Reuters Institute, University of Oxford | Research Associate and DPhil, Oxford Internet Institute | Affiliate, Tow Center & UNC CITAP
Published Mar 28, 2025
This week, my colleagues and I organised an event on AI and news at Reuben College, where we brought together some of our own researchers and colleagues from the university and the news industry.
Thanks to our fabulous communications team, there is a full write-up of the day on our website, together with videos from all sessions:
https://reutersinstitute.politics.ox.ac.uk/news/ai-and-future-news-2025-what-we-learnt-about-how-its-impact-coverage-newsrooms-and-society
Some of my own reflections…
- A point I realised (thanks to Matt Rogerson) is that, despite limited transparency about AI deals in research and public circles, the key decision-makers and regulators do have the details of those deals. However, I believe it is a significant problem that these details have not been more widely reported: we know very little about them and cannot properly assess them independently.
- I’ve been on record for a while arguing that the financial benefits for news companies from data deals, particularly for training data, are likely limited. Andrew Strait pressed this point, too: “You’re offering a one-time value for the tech company on a deal that’s going to expire, and after that point, it kind of loses its value... […] So this is a kind of weird situation where news companies are signing five-year deals with this information, but after five years, I doubt they’ll be renewed because they don’t need that information for training as much as they once did.”
- Despite all the brouhaha about progress in AI systems, people like Sannuta Raghu and Jazmín Acuña remind us that AI models still exhibit limitations in handling diverse languages and dialects, which poses challenges not just for the news but also for societies at large, and is something various governments, including India’s, are acutely aware of. This was also pointed out by Roxana Radu, who highlighted significant international disparities in access to and development of AI, with varying levels of national strategies and capabilities.
- Looking at the integration and adoption of AI, the benefits are clear: helping journalists do deeper work, enabling things that otherwise would not have been possible (demonstrated by Dylan Freedman’s experience of using AI in investigative work or Nathalie Malinarich’s demonstration of AI use at the BBC), and doing more with less – as long as these tools are used transparently (with Katharina Schell providing good perspectives from her RISJ project work). The challenges are managing the combination of people, the existing technology stack, and the idiosyncrasies of newer AI models, plus regulation (e.g. data protection). Increasingly, the focus is also on being able to assess how and where AI works (and being confident in one’s ability to do so). Reader impact, for example, is easier to measure than the effect of AI on internal workflows.
- While we have some research showing significant error rates in AI-generated news answers as well as attribution issues (with a shoutout to the work of Klaudia Jaźwińska at Tow), examples of AI use inside newsrooms are in some cases more hopeful. Liz Lohn from the Financial Times mentioned that when testing their AI for bullet point summaries, “looking at how much the editors need to change it, no one needed to change anything factual” – quite remarkable, and a demonstration that well-tuned and carefully tested systems using internal data can work well. Liz also made the point that, in her experience, having a human in the loop is important but can also be a hindrance (I wrote about this here: https://reutersinstitute.politics.ox.ac.uk/news/neither-humans-loop-nor-transparency-labels-will-save-news-media-when-it-comes-ai).
- It was also terrific that Chris Summerfield, Vicki Nash and Roxana Radu were able to join us to provide some wider context on AI. Chris suggested that AI should be viewed as a tool for moulding and shaping interactions among humans, rather than something that will replace or think like us, and that we are currently transitioning from text-based to multimodal systems (relying on audio and video) and from systems that merely respond to those that can take actions. Vicki Nash observed that the most interesting or “best” uses of AI often come with the biggest risks. For example, she highlighted the potential for personalisation and contextualisation to offer valuable answers and resources but cautioned about the risk of narrower worldviews and overly mediated experiences. She also stressed the key role the media play in helping people understand the risks and opportunities around the technology – a good reminder for everyone in the news of their role in all this.
Finally, one takeaway is that we need to be quite careful in how we think about AI. It’s easy to assume that because AI systems do not always work, are deeply flawed, or are ethically questionable, people in general will not find them useful and will stop adopting them. This is not the trend we see. It’s an uncomfortable reality for some, for sure, as this is where normative and legal considerations rub against the empirics – but a reality nonetheless.