As news media companies, one of our core value propositions – and values – is the promise of accuracy. This differentiates us from many other vectors of information in society. Whether you call them viewers, listeners, readers or users, people turn to us – rather than to protagonists directly or to ‘some guy on social media’ – when they want news that has passed a certain test of professional scrutiny.
What generative AI promises is to create a lot of content, quickly. What it does not yet promise is accuracy: It may be interesting at times, even impressive and clever, but it is not accurate to the degree required where its capabilities intersect with the promise professional news media makes.
But this doesn’t mean it isn’t worth evaluating where generative AI tools can already deliver value to our organizations. There are two dimensions along which we can look to generative AI for value. To paraphrase Agnes Stenbom, head of the innovation lab IN/LAB at Schibsted, the global media group based in Scandinavia, we need to look at both exploitative and explorative uses of generative AI.
Your human journalists can investigate, identify sources, and establish the credibility and relevance of information; they can interview, and shape and re-shape the direction of a conversation when an interviewee seems to want to obfuscate or mislead. These are complex tasks that require highly developed intelligence – in particular, traits of intelligence that our current AI entirely lacks. Causality, anticipation, theory of mind, emotions, cultural context and connection: these are only some of the dimensions of human intelligence that make the way we process information infinitely more layered than what a computer can do.
But at the exploitative end of where generative AI can help us, the opportunity for media organizations is that there are plenty of rote, high-effort, intellectually lighter tasks that, really, no one in your organization is thrilled to take on: summarizing, proposing content for testing, optimizing content for SEO, rewriting content with certain tone characteristics (“simplify the language for a 10-year-old”) or to a certain length.
For these tasks, even when the output isn’t perfect, the AI can usually furnish a good measure of the labor, and human participation in the process (called human-in-the-loop in systems parlance) consists of validating, or invalidating, that the content is fit for consumption. In this way, the place of generative AI tools today is best understood as an internal assistant for your teams, rather than a replacement for them.
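To make the human-in-the-loop idea concrete, here is a minimal, hypothetical sketch of such a workflow: the model drafts, and an editor's decision gates publication. The function names (`generate_draft`, `human_review`) are illustrative assumptions, not any vendor's API; `generate_draft` is a stand-in for a real model call.

```python
def generate_draft(task: str, source_text: str) -> str:
    """Placeholder for a call to a generative model (e.g. a summarizer).
    In a real system this would call whichever model API you use."""
    return f"[draft {task}] " + source_text[:60] + "..."

def human_review(draft: str, approve: bool) -> dict:
    """The editor's decision gates publication; the AI never publishes alone."""
    return {"draft": draft, "status": "approved" if approve else "rejected"}

article = "The city council voted on Tuesday to expand the bus network..."
draft = generate_draft("summary", article)

# In practice a real editor reads the draft and decides; here it is a flag.
decision = human_review(draft, approve=True)

# Only approved drafts ever reach the publishing queue.
publish_queue = [decision["draft"]] if decision["status"] == "approved" else []
```

The design point is simply that the model's output is treated as a candidate, never a finished product: nothing enters `publish_queue` without an explicit human approval step.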
But then there is the explorative end of the opportunity spectrum.
Consider that generative AI may not, in the long term, be about producing the same types of content, formats and production we deliver today – just done by a robot. There are many types of content, media or formats that most news media organizations can’t serve because their resources simply don’t allow it: translation; delivering text over voice (including the voice of one of your own hosts – Aftenposten, in Norway, trained an AI on the voice of one of its hosts to deliver the news in a voice its users already knew); adapting or supplementing content with explainers or catch-up features; or proposing language fitted to the age level or preferred style of a user group.
There are already media companies making high-value bets on generative AI. Bloomberg, for example, created BloombergGPT – its own Large Language Model (the intelligence behind a generative AI). Owning an LLM for well-defined use cases gives Bloomberg the chance to build a smaller, more tailored model that may deliver significant upsides over a commercial model meant to serve general use cases. It also makes the company more independent from technology vendors. This is the higher end of the type of experimentation that media companies are taking on with generative AI.
More modest opportunities abound in finding efficiencies throughout your organization. Ekstra Bladet, the Danish publisher, has looked at how generative AI could advance headline testing. The large U.S. media group Gannett is building capabilities for generating weather stories.
It remains that the pitfalls are numerous. I have mentioned the AI’s hallucinations (errors, to call them what they truly are), but there is also the very real concern of committing plagiarism by using a tool whose output may be challenged as overreaching what courts consider Fair Use. You have to consider how you will disclose the presence of AI, and what it means for your brand to off-load the work of knowledge creation to a robot when the authority of humans in knowledge creation was the core value proposition of a media organization in the first place. Internally, bringing generative AI into your organization will also raise management challenges. There are likely to be licensing and copyright questions. And you have to be mindful of the data privacy implications of using these technologies, particularly when you bring in third parties to power your tools.
I am a technology hopeful by nature, but I recognize that this perspective is often not far from foolishness. In this case, foolishness would be failing to appreciate the very real risks of overestimating the capabilities of this young technology, and the damage it can do to products, brands and, in fact, society at large. If we advance with clear eyes, we can, hopefully, be judicious in our use of this new technology and, cautiously, leverage its upsides.