How much do you know about generative AI?

It’s been almost a year since ChatGPT was released, and it’s been a hot topic in Higher Education ever since. Members of TILE have been working with colleagues across the University to make sure staff and students know how to use tools like ChatGPT both effectively and ethically.

Developments in the field of generative AI have been breathtakingly fast this year, so you’d be forgiven for feeling lost and confused. Here’s a short summary of the key facts and issues concerning generative AI in Higher Education, followed by some practical tips for teaching staff.

What is Generative AI?

Generative AI refers to artificial intelligence (AI) systems that can generate new content, such as text, images, audio and video, based on data they have been trained on. The best-known examples are large language models such as the GPT series that powers ChatGPT, image generators like DALL-E, and emerging tools that generate audio or video.

These systems are trained on vast datasets of text, images or other media to recognise patterns and relationships between elements. They can then generate brand new outputs that are high quality and often remarkably human-like. The possibilities for using generative AI to create content, art, media and more seem endless. But as with any powerful technology, there are important ethical considerations around its use.

The ethical implications for Higher Education

Generative AI can be a hugely powerful tool in education – helping both staff and students work more efficiently and effectively. It can also be argued that because of their increasing presence in the workplace, students need to be taught how to use generative AI tools to prepare them for employment. However, generative AI introduces new ethical dilemmas for universities that need to be properly considered:

Authenticity – If students use generative tools to produce written work, how can instructors determine what is the student’s original work versus what is AI-generated? So far, no “AI detection” tools have been found to be reliable, and they can also give false positives, for example when students are writing in their second language.

Bias – Current generative AI models reflect biases in their training data. Both written and image-based outputs can perpetuate harmful stereotypes around gender, race, ability and more.

Accessibility – Whilst generative AI tools (translation apps, grammar checkers etc.) can improve accessibility for some, there is an unequal distribution of resources (premium AI tools come at a price) and skills (e.g. expertise in prompt engineering), which can make the tools themselves inaccessible.

Impact on skills – If generative AI makes producing content easier, will this reduce students’ motivation to build creative and critical thinking skills themselves?

Intellectual property – Who owns the output generated by AI systems – the company that created it, the user or no one? Important questions around copyright and plagiarism arise.

What can we do about it?

If you’re curious or concerned about generative AI, here are some practical tips.

Learn more about what generative AI can do

To learn more about generative AI, a good place to start is Jisc’s AI Primer or the UNESCO Quick Start Guide. Staffordshire University staff can book on to one of the training sessions hosted by the Library through this Access Link. You can also check out our AI guidance for students via the Library and Learning Services website.

You can try out some of the popular text-generation tools by registering for a free account with ChatGPT, Google Bard or Claude, or use Bing Chat in a Microsoft Edge browser.

Talk to your team about how gen AI affects your work 

How and whether generative AI should be used in your work and teaching can be quite context-dependent. Having a discussion with your immediate colleagues about gen AI can help you come up with a coherent and consistent approach, and it’s a good way to share knowledge and resources too. You can register your interest in becoming an “AI Liaison” for your team – that is, someone who initiates these kinds of discussions with their colleagues – by filling in this form.

As a result of these discussions, you might identify some potential changes to your modules or assessments, such as incorporating gen AI skills into your teaching or making your assignments more resilient to AI misuse. You can get support with this from your School’s TILE Hub and Academic Development team contacts.

Communicate clear expectations to students

The term “Generative AI” encompasses a wide variety of tools and uses, and it’s understandable that some students might be unsure which of them are acceptable to use in assignments. For example, submitting an entire AI-generated essay as your own work is clearly inappropriate, but what about using editing tools such as Grammarly or Quillbot to reword something you’ve written?

Think about what students need to do to show they’ve met your learning outcomes, and where the use of generative AI might disrupt that, and set some clear boundaries and expectations accordingly. For example, you could ask students to disclose their use of gen AI when submitting an assignment, or to reference generated content as set out on Cite Them Right Online. For more general advice on academic integrity, point students towards the Library’s AI guide.

Lead by example 

It’s not just students who use generative AI – there is a growing number of tools to help educators plan, teach and even mark, and the ethical implications of using them still apply.

It might decrease your workload to let AI generate summaries of course texts, or to upload students’ essays for some AI-generated feedback, but both of these uses involve sharing other people’s work with a commercial black-box system without their knowledge or consent.

It can help to be transparent with students about how and where you’ve used generative AI, and to model good practice yourself. For example, most of the first half of this blog was written by entering the following prompt in the claude.ai tool:

“Write a 500 word tech blog article aimed at university instructors in the UK, explaining what generative AI is and the ethical issues of its use in higher education.”

I then edited the output to change any US English spellings, add in extra points I wanted to make, and remove some sentences that seemed overly general or corporate in tone. Could you tell? Would that be acceptable in an assessment?

Follow the established procedure if you’re concerned about misconduct

Although generative AI is new, academic misconduct is not, and the rules surrounding falsely passing work off as your own still apply. Where you suspect academic misconduct of any form, you should follow the procedure and guidance outlined in the Staffordshire University academic conduct policy.
