ChatGPT has been on our radar for just over three months, and text-to-image AIs like Midjourney and Stable Diffusion a little longer. In that time, we’ve come rapidly to understand the importance of “the prompt”: that strangely crafted, almost magical sentence or short paragraph that the AI reads and, through some sort of digital alchemy, turns into an essay, short story, business plan or fantastical image in moments.
My first experience of ChatGPT was mixed. Like most new users, I wanted to see how good it was at writing “out of the box”. I asked it to write an essay on the role of the witches in Macbeth, and what it produced was quite impressive. It was somewhat pedestrian in its phrasing but was grammatically perfect and ticked all the boxes for a decent grade.
However, on repeating that simple initial prompt with variations, I noticed that the output had a certain flavour to it. The writing was clear and ordered, but essentially dull. There was no personality behind it. The more I played with prompting, the more I began to understand that our initial fear of students using these models to plagiarise is not as worrisome as we first thought.
“We should be focusing our energies on how we input into these models through prompts.”
Because we tend to learn our students’ written style fast, and there are other ways of assessing learning, we should not get too hung up on the essay cheating angle. We should instead be focusing our energies on how we input into these models through prompts, as this fundamentally impacts the models’ output. We need to teach ourselves, and our students, how to do this well.
This is why I have begun a programme to bring the British International School of Tunis up to speed with AI, and in particular ChatGPT. Staff have had introductory INSET, and students demos in assembly.
A task force has been formed and we are using Slack to share thoughts and move the debate forward as teachers begin to tentatively trial ideas in class. The first task will be to explore how the introduction of AIs like ChatGPT into school will move all teachers from simply being teachers of their subject to teachers of a new Literacy for AI, which I will henceforth abbreviate as LfAI.
If the Oxford definition of literacy is “the ability to read and write” or “competence or knowledge in a specified area”, then LfAI centres around the ability to write prompts in such a way as to generate the visual, audio or textual outputs originally envisioned. Writing a prompt, whatever the intended output, is a skill we are only just beginning to understand.
“Think of prompt engineering like training a dog.”
What we are already learning is that prompting is not like any literacy we have used before. We can input a simple prompt like “Write a scheme of work on Macbeth”, but if we want the AI to output what we had originally conceived we need to do more. We need to learn how to prime the AI model with context so that its output is optimised. This is the art of prompt engineering.
Think of prompt engineering like training a dog (which is how OpenAI describes it). You want a dog to come to heel, so you start by taking small cubes of cheese out with you on a walk. The dog soon associates you with tasty cheese, so that, even when you stop handing out cheese, the dog still comes to heel when requested. If you have an unruly dog you should try it.
With ChatGPT, the same rule applies. You want it to respond like a teacher. You therefore begin the prompt “Take the role of a skilled, experienced and creative English teacher”. You then say what you will do, and then what it needs to do in response.
For example: “I will give you a set of scheme of work criteria. You will plan out a detailed and logical scheme of work, to include relevant written and oral assessments.” You then give it the scheme of work criteria, such as syllabus, age group, number of weeks, and topic. It will base its output both on the task you give it and on how you wish it to respond.
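The role-then-task pattern described above can be sketched as a simple prompt builder. This is an illustrative sketch only: the function name and the example criteria values are my own, not part of any official tool or API.

```python
def build_teacher_prompt(role, instructions, criteria):
    """Assemble a layered prompt: role first, then the task framing,
    then the concrete criteria, following the pattern described above."""
    criteria_lines = "\n".join(f"- {key}: {value}" for key, value in criteria.items())
    return (
        f"Take the role of {role}.\n"
        f"{instructions}\n"
        f"Scheme of work criteria:\n{criteria_lines}"
    )

# Example values are hypothetical, for illustration only.
prompt = build_teacher_prompt(
    "a skilled, experienced and creative English teacher",
    "I will give you a set of scheme of work criteria. "
    "You will plan out a detailed and logical scheme of work, "
    "to include relevant written and oral assessments.",
    {"syllabus": "IGCSE English Literature", "age group": "14-16",
     "number of weeks": 6, "topic": "Macbeth"},
)
print(prompt)
```

Keeping the role, the task framing and the criteria as separate pieces makes it easy to swap one out (a different subject, a different age group) while leaving the rest of the prompt intact.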
“Whether it’s in English, art, computer science or music, the alchemy of prompt engineering will play a critical role.”
But that’s only half the prompt engineering story. It becomes more complex when moving into text-to-image models. Take this as a typical text prompt for the Midjourney text-to-image generator: “car:: red:1.5:: small:0.5:: paris:: summer”.
What is going on here? It is giving each part of the image a different weighting. Parts without a number default to a weight of 1, so the red will have 50 per cent more weighting (or importance) than other parts of the image, and the size of the car will have 50 per cent less importance. Here is the result when this exact text prompt is input into Midjourney:
Compare this to the result when some of the descriptors and numbers are removed, writing instead: “car:: paris:: summer”:
Whilst you don’t at this stage need to understand the exact process by which these remarkable (and entirely original) artworks are created from words alone, what it does demonstrate is that if we want our students to make the most of these new tools, we must teach these new literacies well.
I would urge schools to invest time and resource in creating their own LfAI curriculum, as every teacher will soon need to see themselves as an LfAI teacher. Whether it’s in English, art, computer science or music, the alchemy of prompt engineering will play a critical role as models like ChatGPT and Midjourney become further embedded in our lives.