A system prompt is the hidden instruction that defines an AI chatbot's persona, rules, and constraints. Learn what to include, how to write one, and common pitfalls.
More about System Prompt
A system prompt is the instruction placed at the top of every request to a large language model that defines how the model should behave. It is not something the end user sees. It tells the model who it is, what it can and cannot do, the tone it should use, and the business rules it has to follow.
For anyone building an AI assistant, the system prompt is the single most important piece of plain-text configuration. A good one transforms a generic model into a coherent, branded, trustworthy agent. A bad one produces a bot that contradicts itself, wanders off-topic, or answers questions it should have refused.
What a System Prompt Usually Contains
A production-grade system prompt typically covers:
- Identity: who the bot is and which company it represents.
- Scope: what topics it should and should not discuss.
- Tone and style: formal, friendly, concise, technical, and so on.
- Output format: bullet points, sentence length, language, whether to include citations.
- Safety rules: what to refuse, how to handle sensitive topics, when to escalate.
- Tool use rules: when to call APIs via function calling versus answering from memory.
- Fallback behaviour: what to do when the answer is not in the provided context, to reduce AI hallucination.
The system prompt is reloaded on every request and occupies a chunk of the context window, so it needs to be tight. Aim for clarity over cleverness.
System Prompt vs. User Prompt
The distinction is structural:
- The system prompt is authored by the developer or platform and represents policy.
- The user prompt is the live input from the person the bot is chatting with.
Good LLM providers treat the two differently, weighting the system prompt more heavily and biasing the model to honour its instructions even when a user tries to override them. This is why prompt injection, where a user tries to talk the bot into ignoring its system prompt, remains a live research area. See prompt injection for more detail.
Why System Prompts Matter for Chatbots
Consider two support bots backed by the same underlying model. One has a system prompt that says "you are a helpful customer support agent for Acme Corp. Answer only from the provided knowledge base. If unsure, say so and offer to connect the user to a human." The other has "you are a helpful assistant." The first bot will stay in character, pass back to a human when needed, and avoid making things up. The second bot will cheerfully answer questions about Acme's competitors, its stock price, and tax law.
The system prompt is the difference between a product and a prototype.
SiteSpeak exposes the system prompt as a configurable text field per chatbot, so teams can write the persona, scope, and escalation rules directly without touching any code. The assistant then enforces those rules on every message using the same underlying model.
Writing an Effective System Prompt
A few practical guidelines:
- Lead with identity and scope. Models follow the earliest instructions most reliably.
- Be explicit about failure modes. Tell the model what to do when it does not know the answer.
- Prefer examples to abstractions. Two short demonstrations often beat a paragraph of rules. This overlaps with few-shot learning techniques.
- Use simple, direct language. Flowery instructions confuse the model.
- Avoid contradictory rules. If two rules can conflict, decide the precedence in the prompt.
- Test with adversarial inputs. Try to break your own prompt before a user does.
Common Pitfalls
Patterns that silently hurt system prompts:
- Kitchen-sink prompts: every rule the team ever wanted, stacked in one giant prompt. The model loses track and the context window bill grows.
- Conflicting tone instructions: "be friendly but formal but brief but thorough". Pick two.
- Vague refusals: "do not discuss inappropriate topics" is easy to debate around. Concrete lists work better.
- No versioning: when the system prompt changes and behaviour regresses, you need to know what changed and when.
Teams that treat the system prompt like real production code, versioned, tested, and reviewed, get far more reliable chatbots than teams that edit it on a whim.