Guidelines on the use of generative AI

Last updated: 20 February 2024

This guidance, and our approach to refining it, is currently in a pilot phase until summer 2024. Suggestions and feedback are very welcome and can be sent to the Head of Campaigns and Digital Communications.

We know that AI tools are already changing the way we work at Oxford, whether that’s improving the quality of our written communications or helping speed up our research tasks. But it is important that we work together to ensure that their use aligns with our values and our obligations. 

These guidelines outline expectations for the use of generative AI tools by communicators at Oxford. Technology and approaches change rapidly; we will aim to test and review these on a regular basis and update as necessary. Our aim is not to be restrictive, but rather to ensure those using these tools feel confident in the boundaries of what is acceptable in our community.

It may be helpful to understand what is and isn't AI, and how it works. A few useful guides:

The guidelines can be read alongside the Russell Group principles on the use of generative AI tools in education, which support the development and use of AI tools in a way that enables their ethical use and upholds academic rigour and integrity.

Summary

Our guidelines can be summarised by the following principles:

  1. We will prioritise human creativity, curiosity and judgement.
  2. Oxford’s reputation stands on the trustworthiness of our research and our communications. We will be transparent with each other and with our audiences about our use of AI and ensure that AI is not used to conceal or alter original intent or meaning.
  3. As communications professionals, we are responsible for the content we produce. The use of any AI tool should therefore be seen as a supportive mechanism to create value and enable productivity, with the recognition that output from generative AI is susceptible to bias, mistakes and misinformation. Content should always be checked and edited appropriately.

In general, we use AI as a starting tool rather than a way to generate a final product; AI should never be ‘the author’ of anything we publish.

We may use AI tools to help us:

  • research a topic
  • generate ideas
  • work on drafts 
  • improve our work and make it more efficient, for example by speeding up manual tasks

We will not publish text that has been written 100% by AI tools or text generators, unless there is a clear, relevant and stated reason for doing so (eg demonstrating the capabilities of a tool).

Output from generative AI is susceptible to bias, mistakes and misinformation, and should always be checked and edited appropriately. Additionally, its tone and approach may not be appropriate for our audiences; care should be taken to ensure that generated content is edited and improved based on the University’s style and tone of voice guidelines.

For images and video, we may use AI to speed up editing, streamline tasks and generate ideas, but we will not use AI to change the meaning or context of existing images.

We may use AI tools to help with edits and corrections.

We will use AI tools ethically, particularly when images or videos involve people. 

This means that we will take care not to change the original intent or context of an image or video. For example, we will not alter a person's expression or remove important context.

We may publish images or videos fully generated by AI, but only when they meet a specific need that cannot be met by using existing photographs, taking new ones or using stock photography. We will always be transparent about our use of AI-generated images and will ensure copyright and GDPR considerations are accounted for.

We will not use voice cloning tools or create deepfakes, which use AI to produce 'fake' video or audio – for example, swapping faces or attributing fabricated speech to a real individual.

We may use AI to assist with communications-related research and analysis, for example of data, content or other inputs.

We may use AI to help with website-related tasks, such as analysing or improving code.

We will always explore new tools and opportunities for using AI, but will evaluate them against our principles and internal security guidelines.

We will ensure that sensitive, embargoed, internal and confidential information is handled with care.

Many generative AI tools make it easy to paste in content or upload images to assist with tasks. While useful, this can introduce risks to intellectual property, privacy and security if not done thoughtfully.

The terms of use of these tools vary widely, and may allow providers to use or access your inputs in ways that don't fit the University's needs or regulations. You should not generally input confidential or sensitive data into a tool. A good rule of thumb is to use these tools only for content that is already in the public domain, or that you would not be worried about becoming public.

Tools can be evaluated for more extensive types of use. OpenAI (ChatGPT) has been through the University's Third Party Security Assessment process and is suitable for use with information and data classified up to 'internal'. If there is another tool you would like to investigate further, the University's Information Security teams have produced more in-depth guidance on how to evaluate the security of generative AI tools and what sort of information is suitable to paste into or use with them.

We will be open with each other about the use of AI in our work, ensuring that team members and our wider stakeholders know what sorts of tools we use and how we might use them.

We will be open with our audiences about the use of AI in our work, including publishing these guidelines and using boilerplate labels where appropriate.

We will foster a culture of experimentation and learning around AI, seeking to share information with each other about new opportunities.

Where a communicator feels that AI has played a key role in developing content (for example, by doing more than just assisting with edits), they may wish to use the following label. Placement would necessarily vary by tool, but the label should be visible to viewers.

This content was generated with the help of [insert tool here], and carefully reviewed by our team for accuracy and appropriateness.

There are a variety of resources available to communicators, both within and outside the University. 

Within Oxford:

External:

About the guidelines

These guidelines were created for the University of Oxford’s Public Affairs Directorate, as well as for use by Oxford’s communications community.

We will review this guidance regularly and update as needed.

For more information or advice on these guidelines or related areas, please contact Liz McCarthy, Head of Campaigns and Digital Communications.

Contact us

For more information about digital communications, please contact the Digital Communications team in PAD:

digicomms@admin.ox.ac.uk