System Prompt
A system prompt is the set of instructions given to a language model before the user's input, defining the model's persona, constraints, output format, and behavioral rules. In the OpenAI API it occupies the "system" role in the message array; in the Anthropic API it is passed as a separate top-level "system" parameter. The user never sees it, but it shapes everything the model does. System prompts are where enterprises encode their business logic: "You are a customer support agent for Acme Corp. You may only reference information from the provided knowledge base. Never discuss competitor pricing. Always respond in the customer's language." A well-written system prompt is the cheapest, fastest way to control model behavior; a poorly written one is why your AI chatbot told a customer they could get a full refund on a non-refundable ticket. System prompt engineering is a core skill that most teams underinvest in.
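The placement described above can be sketched as a request payload; this is a minimal illustration assuming an OpenAI-style message array, and the model name and policy text are placeholders, not a real deployment:

```python
# Minimal sketch: where a system prompt sits in an OpenAI-style chat request.
# The model name and support-policy wording are illustrative placeholders.

def build_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat request: the system prompt occupies the first
    message with role "system"; the user's input follows it."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

request = build_request(
    system_prompt=(
        "You are a customer support agent for Acme Corp. "
        "You may only reference information from the provided knowledge base. "
        "Never discuss competitor pricing."
    ),
    user_message="Can I get a refund on my ticket?",
)
```

Because the system message precedes the user turn, its constraints apply to every response in the conversation.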
Related terms:
Conway's Law
Conway’s Law states that organizations designing systems are constrained to produce designs mirroring their own communication structures. For example, separate sales, marketing, and support teams often yield a website organized into Shop, Learn, and Support sections—reflecting internal divisions rather than user needs.
RAG (Retrieval-Augmented Generation)
Retrieval-augmented generation (RAG) links a large language model to external documents, databases, and APIs so it retrieves relevant, up-to-date context at query time and generates answers grounded in real evidence rather than relying solely on its training data.
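The retrieve-then-generate flow can be sketched in miniature; this toy version uses word overlap in place of a real vector search, and the prompt template and documents are invented for illustration:

```python
# Toy RAG sketch: retrieve the most relevant document, then ground the
# prompt in it. Real systems use embedding-based vector search and an
# LLM call; the overlap scorer and template here are simplified stand-ins.

def retrieve(query: str, documents: list[str]) -> str:
    """Pick the document sharing the most words with the query (toy retriever)."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Inject the retrieved context so the model answers from evidence."""
    context = retrieve(query, documents)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Refund policy: tickets marked non-refundable cannot be refunded.",
    "Shipping: orders ship within two business days.",
]
prompt = build_grounded_prompt("Can I refund a non-refundable ticket?", docs)
```

The grounded prompt is then sent to the model, which answers from the retrieved context rather than from memorized training data.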
AI Evaluation
AI evaluation is the practice of systematically measuring an AI system’s performance against defined criteria—accuracy, latency, cost, safety, and user satisfaction—using representative test datasets, business-outcome metrics, and automated pipelines before and after deployment. Without it, organizations risk flying blind, mistaking demo success for reliable production performance.
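The core of such a pipeline, measuring a model against a labeled test set, can be sketched as follows; the stub classifier and test cases are invented for illustration, standing in for a real LLM call and a representative dataset:

```python
# Minimal evaluation loop: run a model over labeled cases and compute
# accuracy. Production evals would also track latency, cost, and safety.

def evaluate(model, test_cases: list[tuple[str, str]]) -> float:
    """Return the fraction of cases where the model output matches the label."""
    correct = sum(1 for prompt, expected in test_cases if model(prompt) == expected)
    return correct / len(test_cases)

def stub_model(prompt: str) -> str:
    """Stand-in for a real LLM call: naive, case-sensitive keyword check."""
    return "positive" if "great" in prompt else "negative"

cases = [
    ("This product is great", "positive"),
    ("Terrible experience", "negative"),
    ("Great support team", "positive"),  # deliberate failure: capitalized "Great"
]
accuracy = evaluate(stub_model, cases)  # 2 of 3 correct
```

The deliberate failure case is the point: a per-case report of misses like this is what separates systematic evaluation from eyeballing demo outputs.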