
Reflections on AI for communications


Last week Kathryn and I had the pleasure of facilitating an AI-themed workshop for the comms team of the National Institute for Health Research (NIHR). We want to share our initial thoughts following the workshop and flag some key topics for further discussion.

(An AI-generated chatbot, created using Adobe Firefly).

During the discussions, the comms team were concerned with the challenge of separating the hype around AI from the practical benefits it offers for teaching and learning. There is a lot of the former, but equally we don’t want to put practitioners off exploring the possibilities of AI with their students.

With roots stretching back more than half a century, AI is not new, but there has been a definite surge in interest with the rise of generative AI platforms like ChatGPT. Despite AI’s ubiquity, many of its applications go unnoticed, partly because they have become seamless tools in our everyday lives, whether that’s a humble email spam filter taking out the trash or Siri playing your favourite song.

We need to see beyond the ChatGPT zeitgeist and look at the bigger picture. It’s only by being critical that we can get the best from AI.

Promptcraft

Part of being critical is asking the right questions. The term ‘promptcraft’, or ‘prompt engineering’, describes the skill of getting the best results from generative AI tools. Prompt engineering is akin to an art form: the choice of words and context in your input can greatly influence the AI’s response.

As a comms professional, prompt engineering can help you simplify your language and generate ideas as a starting point for content.
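As a rough illustration of promptcraft, the same request reads very differently once you spell out audience, tone and format. This is a minimal sketch; the function name and ingredients are our own illustration, not part of any particular AI tool:

```python
# A minimal sketch of "promptcraft": composing a prompt from explicit
# ingredients (task, audience, tone, format) rather than a vague request.
# All names here are illustrative, not from any specific AI platform.

def build_prompt(task: str, audience: str, tone: str, output_format: str) -> str:
    """Compose a structured prompt from its key ingredients."""
    return (
        f"You are writing for {audience}. "
        f"Use a {tone} tone. "
        f"{task} "
        f"Respond as {output_format}."
    )

# A vague prompt leaves the model to guess audience and tone:
vague = "Write about our new research funding scheme."

# A crafted prompt makes those choices explicit:
crafted = build_prompt(
    task="Summarise our new research funding scheme in plain English.",
    audience="a general public audience with no research background",
    tone="friendly, jargon-free",
    output_format="three short bullet points",
)
print(crafted)
```

Pasting the crafted version into a generative AI tool will usually produce output far closer to what a comms team actually needs than the vague one.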

However, even with an understanding of prompt engineering, digital inequality persists. A 2023 study from the Pew Research Center highlights how AI programs like ChatGPT are far more likely to be on the radar of those with higher household incomes and formal education. The same study paints a mixed picture of how useful respondents found such tools.

This raises questions not only about equity of access, but also about how people are using these tools and what they are asking them to do. The old “garbage in, garbage out” saying springs to mind here.

In the workshop, we discussed how you might define your input and evaluate your output from generative AI tools.

Determining the accuracy of the output is key; you would never rely solely on one source for your research. Some questions to ask: Is the tone right for your audience and context? Is the language appropriate? How does the output compare to other research? What bias might be present in the response? (This might come from the training set, or the way the model was trained.) Is the content current or out of date?
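These evaluation questions can be kept to hand as a simple checklist. The sketch below encodes them directly; the structure and function name are our own illustration:

```python
# A sketch of the evaluation questions above as a reusable checklist.
# The criteria mirror the questions in the post; everything else is illustrative.

EVALUATION_QUESTIONS = [
    "Is the output accurate when checked against other sources?",
    "Is the tone right for your audience and context?",
    "Is the language appropriate?",
    "How does the output compare to other research?",
    "What bias might be present (from the training set or training method)?",
    "Is the content current, or out of date?",
]

def outstanding_checks(answers: dict) -> list:
    """Return the questions not yet answered satisfactorily."""
    return [q for q in EVALUATION_QUESTIONS if not answers.get(q, False)]

# Example: two checks done, four still to review before publishing.
answers = {
    EVALUATION_QUESTIONS[0]: True,
    EVALUATION_QUESTIONS[1]: True,
}
print(f"{len(outstanding_checks(answers))} questions still to review")
```

Running every piece of AI-generated content through such a checklist before publication is one way to build the critical habit the workshop discussed.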

It’s only by asking these critical questions that we can hope to get the best from generative AI.

The pros and cons of AI-generated images

A conversation was sparked around using AI to generate images for communications. Just as generative AI has transformed text-based interactions, it has also made significant strides in generating images. Tools like DALL·E 3, Adobe Firefly and Midjourney generate images from text descriptions in a matter of seconds. Such images can be used to illustrate complex concepts, create engaging imagery or develop custom designs. If you’d like to find out more, Jisc’s National Centre for AI (NCAI) has written a blog post exploring AI image generation.

How is AI being used to spread misinformation?

The technology is evolving quickly, and the line between what is real and what is fake continues to blur. Unfortunately, AI-generated content is already being used to spread misinformation. Last year, a fake AI-generated CNN news story claiming climate change is ‘seasonal’ spread across TikTok and was shared by thousands of users (Hsu, 2022). We need to get students questioning the validity of sources, considering potential agendas or biases, and cross-checking content against other trustworthy sources.

Further resources

Follow the work of the National Centre for AI (Jisc)

Read the Generative AI – a primer from the National Centre for AI.

References

Hsu, T. (2022). ‘Worries Grow That TikTok Is New Home for Manipulated Video and Photos’, The New York Times, 4 November. (Accessed: 13 September 2023).

Pew Research Center (2023). Those with higher household incomes, more formal education are particularly likely to know about ChatGPT. (Accessed: 13 September 2023).

By Scott Hibberson

Subject Specialist (Online learning) at Jisc.
