Last updated: 8 May 2025
As AI tools and capabilities develop, this guidance, and our approach to refining it, may adapt. Suggestions and feedback are very welcome and can be sent to the Head of Digital Campaigns and Communications.
AI tools are changing the way we work at Oxford, whether that’s improving the quality of our written communications or helping speed up our research tasks. But it is important that we work together to ensure that their use aligns with our values and our obligations.
Oxford is committed to setting a clear, confident standard for the responsible use of AI in communications. These guidelines outline expectations for the use of generative AI tools by communicators at Oxford. Technology and approaches change rapidly; we will aim to test and review these guidelines regularly and update them as necessary. Our aim is not to be restrictive, but rather to ensure those using these tools feel confident in the boundaries of what is acceptable in our community. These guidelines were developed with input from Divisional Communications Leads, the AI Competency Centre and Information Security.
The guidelines can be read alongside the Russell Group principles on the use of generative AI tools in education, which support the ethical development and use of AI in a way that upholds academic rigour and integrity.
Scope: context
This guidance has been developed for the use of staff in the Public Affairs Directorate. However, it is written for any staff member employed as a communications professional at the University of Oxford or working with communications in their role, and we encourage its adoption across the profession. It may also be of use to anyone who wishes to consider the use of GenAI in their communications and content generation.
Scope: technology
This guidance relates to the use of generative AI (GenAI) tools — artificial intelligence systems that simulate creativity by predicting and assembling outputs based on patterns learned from the data they have been trained on, not human comprehension. Their results are intended to appear plausible but can contain inaccurate information or off-topic hallucinations, requiring careful human oversight. This includes tools built on large language models (LLMs), such as ChatGPT, Claude, Gemini or Copilot, as well as those that generate images, audio or video, like Midjourney, NotebookLM or Sora.
GenAI is increasingly embedded within the platforms and systems we already use, enhancing functionality and speeding up tasks when used effectively. Precisely because these tools are so readily available, it is important to remember that GenAI and LLMs do not comprehend information in any way akin to humans: their neural networks can generate outputs that are factually inaccurate and misleading.
This guidance applies to both standalone GenAI tools and AI-powered features within other software or platforms.
It does not cover all forms of artificial intelligence used across the University – only tools that generate content. However, some of the broad principles outlined here may be useful in other contexts too.
Understanding what AI is – and what it isn’t – is critical for its effective use. Here are a few helpful guides:
Summary
Our guidelines can be summarised by the following principles:
- We prioritise human creativity, curiosity and judgement.
- Oxford's reputation stands on the trustworthiness of our research and our communications. We are transparent with each other and with our audiences about our use of AI and ensure that AI is not used to conceal or alter original intent or meaning.
- As communications professionals, we are responsible for the quality and accuracy of the content we produce. Any AI tool should therefore be treated as a supportive mechanism to create value and enable productivity, recognising that outputs from generative AI are susceptible to bias, mistakes and misinformation. All AI-assisted outputs must undergo a human review for factual accuracy, appropriate tone and ethical integrity before public release.
- We work to ensure we have the skills to use the right GenAI tools in the right way, understanding the context of the tools we're using and the benefits and risks they offer. This includes using tools appropriately, in line with University guidance, and taking appropriate data and security precautions.