Guidelines on the use of generative AI

Last updated: 8 May 2025

As AI tools and capabilities develop, this guidance, and our approach to refining it, may adapt. Suggestions and feedback are very welcome and can be sent to the Head of Digital Campaigns and Communications.

AI tools are changing the way we work at Oxford, whether that’s improving the quality of our written communications or helping speed up our research tasks. But it is important that we work together to ensure that their use aligns with our values and our obligations. 

Oxford is committed to setting a clear, confident standard for the responsible use of AI in communications. These guidelines outline expectations for the use of generative AI tools by communicators at Oxford. Technology and approaches change rapidly; we will aim to test and review these on a regular basis and update as necessary. Our aim is not to be restrictive, but rather to ensure those using these tools feel confident in the boundaries of what is acceptable in our community. These guidelines were developed with input from Divisional Communications Leads, the AI Competency Centre and Information Security.

The guidelines can be read alongside the Russell Group principles on the use of generative AI tools in education, which support the development and use of AI tools in a way that enables their ethical use and upholds academic rigour and integrity.

Scope: context 

This guidance has been developed for the use of staff in the Public Affairs Directorate. However, it is written for any staff member employed as a communications professional at the University of Oxford, or working with communications in their role, and we encourage its adoption across the profession. It may also be of use to anyone who wishes to consider the use of GenAI in their communication and content generation.

Scope: technology

This guidance relates to the use of generative AI (GenAI) tools — artificial intelligence systems that simulate creativity by predicting and assembling outputs based on patterns learned from their training data, rather than on human comprehension. Their results are intended to appear plausible but can contain inaccurate information or off-topic hallucinations, requiring careful human oversight. This includes tools built on large language models (LLMs), such as ChatGPT, Claude, Gemini or Copilot, as well as those that generate images, audio or video, like Midjourney, Notebook LM or Sora.

GenAI is increasingly embedded within the platforms and systems we already use, enhancing functionality and speeding up tasks when used effectively. Even so, it is important to remember that GenAI and LLMs do not comprehend information in any way akin to humans, and their outputs can be factually inaccurate and misleading.

This guidance applies to both standalone GenAI tools and AI-powered features within other software or platforms.

It does not cover all forms of artificial intelligence used across the University – only tools that generate content. However, some of the broad principles outlined here may be useful in other contexts too.

Understanding what AI is – and what it isn’t – is critical for its effective use. Here are a few helpful guides:

Summary

Our guidelines can be summarised by the following principles:

  1. We prioritise human creativity, curiosity and judgement.
  2. Oxford's reputation stands on the trustworthiness of our research and our communications. We are transparent with each other and with our audiences about our use of AI and ensure that AI is not used to conceal or alter original intent or meaning.
  3. As communications professionals, we are responsible for the quality and accuracy of the content we produce. The use of any AI tool should therefore be seen as a supportive mechanism to create value and enable productivity, with the recognition that outputs from generative AI are susceptible to bias, mistakes and misinformation. All AI-assisted outputs must undergo a human review for factual accuracy, appropriate tone and ethical integrity before public release. 
  4. We work to ensure we have the skills to use the right GenAI tools in the right way, working to understand the context of the tools we’re using and the benefits and risks they offer. This includes using tools appropriately, in line with University guidance, and taking appropriate data and security precautions.

In general, we may use AI as a supportive tool rather than a way to generate a final product; AI should never be ‘the author’ of anything we publish.

We may use AI tools to help us:

  • research a topic, including helping us understand audiences, research papers and trends, or providing insight and information
  • generate ideas or act as a thought partner
  • work on drafts for written content such as press releases, social media posts, articles, reports, etc., with humans always providing input and finalising
  • generate meeting minutes or transcripts using University-approved methods (please note that the use of unapproved AI transcription bots in Teams meetings is not allowed)
  • improve written content, for example by applying style guidance or targeting certain audiences, or reformat existing content for new uses
  • generate alt text
  • improve our work and make it more efficient, for example by speeding up manual tasks

We will not publish text that has been written 100% by AI tools or text generators, unless there is a clear, relevant and stated reason for doing so (eg demonstrating the capabilities of a tool).

Output from generative AI is susceptible to bias, mistakes and misinformation, and should always be checked and edited appropriately. Communications professionals need a thorough and accurate understanding of the original material, and particular attention should be paid to elements such as quotations, facts and figures. Additionally, the tone and approach of AI-generated content may not be appropriate for our audiences; care should be taken to ensure that generated content is edited and improved in line with the University’s style and tone of voice guidelines.
 

We may use AI tools to:

  • speed up editing, streamline tasks and generate ideas
  • enhance images or video (eg upscaling, denoising, colour correction or lighting adjustments)
  • help with edits and corrections
  • support us in generating scripts and code
  • generate draft transcripts or subtitles

We will use AI tools ethically, particularly when images or videos involve people. 

Users must recognise that AI tools can reflect and reinforce societal biases. All outputs should be reviewed through an equity and inclusion lens to prevent harm and ensure fairness. Ethical use of AI means avoiding misrepresentation, not reinforcing harmful stereotypes and not misleading audiences about provenance. For example, we will not edit a person’s expression, remove relevant context or attempt to present events differently from how they occurred.

We may publish images or videos fully generated by AI, but only when they meet a specific need that cannot be met by using existing photographs or illustrations, taking or commissioning new ones or using human-created stock images. We value the role of human creativity. We will always be transparent about our use of AI-generated images and will work to ensure copyright, IP and GDPR considerations are accounted for.

We will not use voice clone generators or create deepfake videos, which use AI tools to create ‘fake’ videos or audio – for example, face swapping or attributing fake speech to a real individual.

We may use AI to assist with communications-related analysis, research or reporting, such as analysis of data, content or other inputs, monitoring of content and sentiment, or automated reporting of trends.

We will not rely on AI summaries alone to understand research outputs or make judgments about their findings. As communications professionals, our understanding of the material is an essential part of our work.

We may use AI to help with website-related tasks, such as analysing or improving code.

We will continue to explore new uses and opportunities for AI, but will evaluate them against our principles and guidelines.

We will ensure that sensitive, embargoed, internal and confidential information is handled with care.

Many generative AI tools allow you to paste content or upload images easily to assist with tasks. While useful, this can introduce risks to intellectual property, privacy and security when not used thoughtfully.

The terms of use for these tools vary widely, and may allow providers to use or access inputs in ways that do not fit the University’s needs or regulations. The InfoSec team has produced specific guidance on the University-licensed ChatGPT Edu and Copilot, which can be used with confidential data but still require good data practices and care. You should not input any confidential or sensitive data into tools that have not been reviewed and cleared for internal or confidential data. A good rule of thumb is that other tools should generally be used only for content that is already in the public domain and that you would not be worried about being made public.

Tools can be evaluated for more extensive types of use. If you have another tool you would like to investigate further, the University’s InfoSec team has produced more in-depth guidance on how to evaluate the security of generative AI tools; new tools may need to go through a Third Party Security Assessment (TPSA), and users will need to consider what sort of information is suitable for pasting into or using with them. The same level of care should be applied to products with embedded AI tools, such as Canva; check with InfoSec if you have any questions about what might apply.

Each AI tool’s approach to copyright and IP may be different, and the legal framework varies by country. While users of the tools are generally protected in their use of output, they may wish to consider the ethical implications of the way in which any given tools have been trained, as well as thorny issues of ownership and copyright of AI-generated content in a fast-changing legal landscape.

It is also important to consider the environmental impact of AI use, as large-scale AI models usually require significant computational power, energy and water. Users should weigh the benefits of AI against the University’s sustainability commitments and consider limiting unnecessary generations or using lower-impact tools when possible. (More info: Generative AI’s environmental costs are soaring — and mostly secret.)

We are open with each other about the use of AI in our work, ensuring that team members and our wider stakeholders know what sorts of tools we use and how we use them.

We are transparent with our audiences about our use of AI, proactively addressing any concerns about authenticity, trust, and human oversight in our communications, including publishing these guidelines and using boilerplate labels where appropriate.

We foster a culture of experimentation and learning around AI, seeking to share information with each other about new opportunities.

Where a communicator feels that AI has played a key role in developing content (for example, by doing more than just assisting with edits), they may wish to use the following label. Placement would necessarily vary by tool, but the label should be visible to viewers.

This content was generated with the help of [insert tool here], and carefully reviewed by our team for accuracy and appropriateness.

Two generative AI tools are now available for University-wide use: ChatGPT Edu and Microsoft 365 Copilot. These licensed versions support research, teaching and admin while offering enhanced data protection and collaboration across Oxford.

Licences can be purchased through IT Services, with support and training provided by the AI and Machine Learning Competency Centre. Central licensing offers cost savings and enterprise-level protections, including safe use with personal or confidential data, and a secure alternative to the free or Plus/Teams versions. Please check with your local finance team or communications leads before purchasing, as local policies may apply.

Additional tools may be used, but staff should make themselves aware of the InfoSec guidance on evaluating and using GenAI tools, including what may and may not be allowed, and what information may be suitable to use with them.

Continuous learning and critical engagement with AI tools are essential to maintaining the University’s communications excellence in a rapidly evolving digital environment.

There are a variety of resources available to communicators, both within and outside the University.

Training is available via the AI and Machine Learning Competency Centre, including several ‘self-serve’ introductory options: https://staff.admin.ox.ac.uk/ai-and-ml-competency-centre-training#/

The University also offers three networks or special interest groups for those interested in AI:

Additional background and learning resources include:

About the guidelines

These guidelines were created by the University of Oxford’s Public Affairs Directorate for use by Oxford’s communications community of practice. 

We will review this guidance regularly and update as needed.

For more information or advice on these guidelines or related areas, please contact Liz McCarthy, Head of Campaigns and Digital Communications.

Contact us

For more information about digital communications, please contact the Digital Communications team in PAD:

digicomms@admin.ox.ac.uk