AI Chatbot Terms > 4 min read

System Prompt: How It Shapes an AI Chatbot's Behaviour

A system prompt is the hidden instruction that defines an AI chatbot's persona, rules, and constraints. Learn what to include, how to write one, and common pitfalls.

More about System Prompt

A system prompt is the instruction placed at the top of every request to a large language model that defines how the model should behave. It is not something the end user sees. It tells the model who it is, what it can and cannot do, the tone it should use, and the business rules it has to follow.

For anyone building an AI assistant, the system prompt is the single most important piece of plain-text configuration. A good one transforms a generic model into a coherent, branded, trustworthy agent. A bad one produces a bot that contradicts itself, wanders off-topic, or answers questions it should have refused.

What a System Prompt Usually Contains

A production-grade system prompt typically covers:

  • Identity: who the bot is and which company it represents.
  • Scope: what topics it should and should not discuss.
  • Tone and style: formal, friendly, concise, technical, and so on.
  • Output format: bullet points, sentence length, language, whether to include citations.
  • Safety rules: what to refuse, how to handle sensitive topics, when to escalate.
  • Tool use rules: when to call APIs via function calling versus answering from memory.
  • Fallback behaviour: what to do when the answer is not in the provided context, to reduce AI hallucination.

The system prompt is reloaded on every request and occupies a chunk of the context window, so it needs to be tight. Aim for clarity over cleverness.

System Prompt vs. User Prompt

The distinction is structural:

  • The system prompt is authored by the developer or platform and represents policy.
  • The user prompt is the live input from the person the bot is chatting with.

Good LLM providers treat the two differently, weighting the system prompt more heavily and biasing the model to honour its instructions even when a user tries to override them. This is why prompt injection, where a user tries to talk the bot into ignoring its system prompt, remains a live research area. See prompt injection for more detail.

Why System Prompts Matter for Chatbots

Consider two support bots backed by the same underlying model. One has a system prompt that says "you are a helpful customer support agent for Acme Corp. Answer only from the provided knowledge base. If unsure, say so and offer to connect the user to a human." The other has "you are a helpful assistant." The first bot will stay in character, pass back to a human when needed, and avoid making things up. The second bot will cheerfully answer questions about Acme's competitors, its stock price, and tax law.

The system prompt is the difference between a product and a prototype.

SiteSpeak exposes the system prompt as a configurable text field per chatbot, so teams can write the persona, scope, and escalation rules directly without touching any code. The assistant then enforces those rules on every message using the same underlying model.

Writing an Effective System Prompt

A few practical guidelines:

  • Lead with identity and scope. Models follow the earliest instructions most reliably.
  • Be explicit about failure modes. Tell the model what to do when it does not know the answer.
  • Prefer examples to abstractions. Two short demonstrations often beat a paragraph of rules. This overlaps with few-shot learning techniques.
  • Use simple, direct language. Flowery instructions confuse the model.
  • Avoid contradictory rules. If two rules can conflict, decide the precedence in the prompt.
  • Test with adversarial inputs. Try to break your own prompt before a user does.

Common Pitfalls

Patterns that silently hurt system prompts:

  • Kitchen-sink prompts: every rule the team ever wanted, stacked in one giant prompt. The model loses track and the context window bill grows.
  • Conflicting tone instructions: "be friendly but formal but brief but thorough". Pick two.
  • Vague refusals: "do not discuss inappropriate topics" is easy to debate around. Concrete lists work better.
  • No versioning: when the system prompt changes and behaviour regresses, you need to know what changed and when.

Teams that treat the system prompt like real production code, versioned, tested, and reviewed, get far more reliable chatbots than teams that edit it on a whim.

Frequently Asked Questions

The system prompt is the instruction set written by the developer or platform that defines the bot's identity, tone, and rules. The user prompt is whatever the customer types on the current turn. Most LLM APIs treat the system prompt as higher priority and reload it with every request, so it influences every response the bot gives.

Short enough to stay tight, long enough to cover identity, scope, tone, safety, and fallback behaviour. Most production system prompts land between 200 and 800 tokens. Longer prompts waste context window space and often dilute the most important rules. If yours keeps growing, that is a signal to split policies out into separate modular prompts or to fine-tune the model.

Users do not see the system prompt in well-built chatbots, but determined users can sometimes coax it out or get the bot to ignore its rules through prompt injection. Modern models are more robust to this than early ones, but it is still a live risk. The best defences are input sanitisation, clear precedence rules in the prompt, and output filtering before the reply reaches the user.

Share this article:
Copied!

Ready to automate your customer service with AI?

Join over 1000+ businesses, websites and startups automating their customer service and other tasks with a custom trained AI agent.

Create Your AI Agent No credit card required