Using generative AI in healthcare comms - our promise to our clients
It’s safe to say that one of the hottest topics of 2023 has been ChatGPT and the launch of other generative AI-driven applications. So, we understand why people want to know our stance on the matter.
AI-enabled software and services aren’t new; as for many businesses, they have been a subtle but staple feature of our internal systems for several years. But the power of generative AI models to produce original content by processing vast databases of public material goes far beyond speeding up administrative systems. Their potential to expand knowledge, creativity and user experience is something to marvel at, but with caution. As Pete Trainor said, “AI outputs are just opinions embedded in and amplified by code”, which means these models risk amplifying our mistakes and our biases as much as our progress, if we’re not careful about what we put in and how we use it.
That’s why we’re sharing these promises, so that when we say we’re experimenting with using AI, you know we’re still committed to remaining compliant, ethical and inclusive.
1. We are developing robust processes and governance
When ChatGPT was first released, we paused any use of it until we’d put in place an ethical framework. We established an AI working group with representation from across departments, and then after developing clear guidance, formally trained all staff on how to use generative AI tools appropriately and compliantly.
Having shown our team how to get started and put a process in place to monitor its use, we continue to iterate and expand in a carefully controlled fashion that is safe for us and our clients.
2. We remain trusted advisors and securely handle personal data and intellectual property
No private or confidential information is ever entered into open platforms (like ChatGPT), even with chat history turned off, as that is not a sufficient guarantee that our data would be safe. Anything confidential, and any intellectual property, stays in-house: if it’s not in the public domain already, we won’t put it there.
3. Our work will always remain authentic
AI tools support our work, they don’t create it. Our use of AI will be transparent and clearly communicated.
We use AI tools to boost our strategic thinking, inspire ideas and help identify patterns and trends, but we don’t blindly trust or accept what we’re given. Our highly skilled and experienced team of researchers, medical writers, copywriters, art directors, designers and educators rigorously check the outputs and then do what they do best: creating trusted content with an empathetic voice and a unique creative look and feel.
4. AI will never replace the patient voice in our work
We know we can’t rely on AI to be inclusive or diverse on our behalf, any more than we can rely on many of the freely available healthcare datasets. Given the known issues around diversity in data, seldom-heard voices may be even harder to find within AI outputs. This is why we’ll always use a full range of sources to ensure we’re fairly capturing all perspectives on an issue, even if that means going directly to the audience themselves and hearing their feedback first-hand.
We are committed to bringing the voices of patients and HCPs into our work and will continue to advocate for this as a part of the process when developing strategies for our clients.
5. We’ll continue to innovate, the right way
Our diverse AI working group continuously monitors our activity and new developments in the market, and keenly follows emerging policy and governance in the generative AI space.
We won’t adopt new technologies and platforms just for the sake of it, or just because everyone else is. Our focus will remain on reviewing and considering what’s out there and setting it against our clients’ needs, and what matters to us as a group.
Will it help us find seldom-heard voices? Will it improve the quality and impact of our work? Will it help us improve the efficiency of our work in an ethical and compliant way? If the answer is ever no, we won’t use it. Instead, we will keep looking at appropriate uses whilst remaining compliant and respecting the confidential nature of our clients’ work.
If you’d like to find out more about the work we have been doing to develop our approach to using AI, and how it could support your projects, let’s talk.