PR News

Telum Talks To: Matt Cram from Orygen

Written by Telum Media | Oct 6, 2025 7:45:00 AM

In today’s ‘always-on’ information economy, the use of AI tools in mental health conversations is becoming increasingly evident, influencing how healthcare communicators engage, educate, and advocate for wellbeing services.

Telum Media spoke to Matt Cram, Head of Media and Communications at the youth mental health institute Orygen, to explore how AI algorithms are affecting mental health narratives, shaping the value of traditional media, and being leveraged for more trusted, accessible, and impactful mental health discussions.

How is the 'always-on' information economy reshaping the way healthcare communicators approach mental health narratives?
In an always-on world, stories about anything, including health, can take off and change shape in minutes. That can be exciting, but it also raises the stakes. We need to keep accuracy and clarity front and centre, and move quickly without feeding misinformation. Balance is important - being responsive without losing responsibility, and being agile while keeping messaging clear and concise.

What hasn’t changed are the building blocks of good communication: case studies, lived experience stories, and strong evidence. These are still what resonate with audiences. If anything, they’re even more important now because they anchor our messages when conversations are moving so fast.

And while the information economy is always on, our brands don’t need to mimic that. This was one of the big lessons in my move from journalism to health communications. We need to be 'always on' in terms of listening and monitoring, but we don’t need to be constantly talking. If you try to fill every moment, you risk becoming part of the noise. Choosing when to show up - and making sure you actually add value when you do - is what keeps your voice trusted and impactful.

How is AI changing the role of earned, traditional, and regulated media in mental health communications - both in terms of amplifying their value and potentially undermining it?
Earned and traditional media still matter enormously. Many AI algorithms are trained on, or fuelled by, this content. When your message lands in a credible outlet, it’s not just reaching readers and viewers - it’s also feeding the machines that will repackage and serve that information to new audiences. All the more reason to make sure what you put out is accurate and on message.

At the same time, AI is blurring the line between what’s legitimate and what’s confected. That risk only makes trusted voices more valuable. When people are swimming in AI-generated noise, established, regulated media serve as touchstones of credibility.

On the flip side, AI has made the creation of strong digital content easier and more accessible than ever. That’s a warning shot to traditional media: if they don’t keep evolving their relevance and depth, they risk being bypassed by brands and organisations that can now publish credible content directly to audiences.

And of course, it’s all moving so fast that some of what I’ve just said could be obsolete within a month - that’s how quickly the landscape is shifting.

As AI becomes more embedded in healthcare comms, particularly around mental health, what strategies have you adopted to preserve the human empathy and tone that are essential for building trust?
AI can be a useful support tool, but it can’t replace the way people connect through empathy, creativity, and lived experiences. When everything starts to sound the same, when outputs drift toward homogeneity, that’s exactly when distinctive human voices stand out most. The stories that cut through are the ones only a person could tell - with warmth, context, and care.

For me, it’s about using AI in the background rather than the foreground. It can help with brainstorming and sense-checking, but there’s always a human review before anything goes out - especially in mental health, where tone and nuance really matter.

And the other piece is making sure lived experience is central in your messaging. That’s not something you can outsource to an algorithm. The empathy and authenticity that come from people sharing their realities are irreplaceable and keep trust at the heart of all communications - from the PR to the mental health training and resources our organisation delivers for young people, parents, and clinicians.

What practical strategies can healthcare communicators use to influence generative AI and search to ensure that credibility, accuracy, and public interest are prioritised?
One of the biggest things we’ve learned is that AI doesn’t just surface information - it consumes it. That means the way we structure and present content really matters.

Using plain language, clear evidence, and formats that AI can easily ingest helps ensure reliable information is what gets pulled through. In the Orygen communications team, we talk about ‘telling the internet who we are’. Every digital publishing opportunity adds to our footprint, so we find as many as we can and treat each one in a deliberate and unified fashion - another chance to show a consistent, credible brand.

Partnerships are just as important. Working with regulators, researchers, and health organisations makes it more likely that credible, well-sourced material is not only published but also picked up and prioritised. It strengthens both the signal and the source.

But again, if I sound like I’ve got all the answers, I don’t. We’re all constantly learning here. The way AI and search behave is shifting so quickly that the best strategies today might look very different tomorrow - which is why one of the important tasks for communicators in 2025 is to stay abreast of the latest trends and news in this space.

There’s truth in the saying: “AI isn’t coming for everyone’s job; it’s coming for the jobs of people who don’t know how to use AI.”