The system message is sometimes referred to as a system prompt.
What does a system message do? #
The system message ‘steers’ the generative AI model to do what you need it to do. It operates in the background; the user or student does not see it.
Here’s an example of a system message that steers the AI to function as a Socratic tutor; a short sketch of how it might be supplied to a model follows the example.
Act as a Socratic tutor in introductory biology. The user is a first-year student studying evolution and cells. Your role is to help the user understand a topic better by engaging in an exploratory conversation to help them develop their own understanding.
RULES:
- You must not tell the user the answer.
- If a user asks you to tell them the answer, politely refuse and explain why Socratic questioning is helpful for learning.
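To make the ‘operates in the background’ point concrete, here is a minimal sketch of how this system message might be supplied to a model using OpenAI’s Python SDK. The model name and the student’s question are placeholders, not part of the example above.

from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

SYSTEM_MESSAGE = """Act as a Socratic tutor in introductory biology. The user is a first-year student studying evolution and cells. Your role is to help the user understand a topic better by engaging in an exploratory conversation to help them develop their own understanding.

RULES:
- You must not tell the user the answer.
- If a user asks you to tell them the answer, politely refuse and explain why Socratic questioning is helpful for learning."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": SYSTEM_MESSAGE},        # hidden from the student
        {"role": "user", "content": "Why do cells divide?"},  # what the student types
    ],
)
print(response.choices[0].message.content)

The student only ever sees their own messages and the model’s replies; the system message travels with the request but is never shown to them.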
What are the basic components of a good system message? #
Like any good prompt, a good system message should have a few components:
Role – generative AIs tend to function best when they are given a role or persona. For example, “Act as a Socratic tutor”, “Act as a first-grade teacher”, “Act as an expert college-level tutor”.
It’s also important to help the AI understand who the user is. For example, “The user is a student studying … who needs to …”.
Task – having a clear task for the AI to do helps to steer it productively. For example, “Your role is to help the user to … by …”.
Requirements and instructions – it’s important to define the parameters within which the AI needs to complete the task. In the Socratic tutor example above, the RULES section provides guidance about how the AI should behave in this role.
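Putting these components together, the Socratic tutor system message above might be assembled like this. This is only an illustrative sketch in Python; the variable names are made up, and you could just as easily write the message out by hand.

role = "Act as a Socratic tutor in introductory biology."
user_context = "The user is a first-year student studying evolution and cells."
task = (
    "Your role is to help the user understand a topic better by engaging in an "
    "exploratory conversation to help them develop their own understanding."
)
requirements = "\n".join([
    "RULES:",
    "- You must not tell the user the answer.",
    "- If a user asks you to tell them the answer, politely refuse and explain "
    "why Socratic questioning is helpful for learning.",
])

# Role, user context, task, then requirements, separated by blank lines.
system_message = "\n\n".join([role, user_context, task, requirements])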
How long should a system message be? #
There is no hard rule, but ‘not too long and not too short’ is a good rule of thumb. If the system message is too short, the AI won’t have enough context to know what to do and how to operate. If it is too long, the AI has too many instructions to keep track of and may not follow all of them consistently.
What are some other tips? #
Keep iterating. This is the most important tip. In our experience, reordering parts of the system message, or using different words, can change the way that the AI behaves.
Be as specific as possible. The less that the AI needs to guess what you want, the better it will perform.
Repeat and/or CAPITALISE important instructions. This helps the AI pay closer attention to the instructions that matter most.
Provide examples. If you’d like the AI to respond in a particular way, it’s helpful to provide one or two examples of this so that it better understands your needs.
Give instructions in the positive, not the negative. Tell the AI what to do, not what not to do. Direct instructions seem to work better.
Use specific formatting. Surrounding text with double asterisks tells the AI that this is **bolded text**. Using a hash (#) at the beginning of a short line of text tells the AI that this is a heading. If you need to provide the AI with an extended piece of text in the system message, surround that text with triple quotation marks and refer to it using a CAPS_LABEL (a short code sketch after the example shows how this might be put together). For example:
Provide constructive feedback to students on their writing. Use the MARKING_RUBRIC to inform your feedback.
MARKING_RUBRIC:
"""
Criterion 1: Critical judgement
High distinction: ...
Distinction: ...
Credit: ...
Pass: ...
Fail: ...
Criterion 2...
"""
Where can I learn more about prompting? #
These pages are great resources for learning how to prompt.
OpenAI’s GPT best practices. OpenAI are the developers of ChatGPT and their resources are succinct and accessible. You may also like to check out OpenAI’s own guidance for designing prompts.
Microsoft’s Introduction to prompt engineering. Microsoft host OpenAI’s large language models and this page has some useful strategies.
Everything I know about prompting. A straightforward guide from an avid GPT user about their experience with prompting.
System prompts repository. A useful collection of system messages to act as inspiration.