System Prompt
A system prompt is the set of instructions given to a language model before the user's input, defining the model's persona, constraints, output format, and behavioral rules. In the OpenAI API it occupies the "system" role in the messages array; the Anthropic Messages API passes it as a top-level system parameter instead. The user never sees it, but it shapes everything the model does. System prompts are where enterprises encode their business logic: "You are a customer support agent for Acme Corp. You may only reference information from the provided knowledge base. Never discuss competitor pricing. Always respond in the customer's language." A well-written system prompt is the cheapest, fastest way to control model behavior. A poorly written one is why your AI chatbot told a customer they could get a full refund on a non-refundable ticket. System prompt engineering is a core skill that most teams underinvest in.
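A minimal sketch of where the system prompt sits in an API call, using the OpenAI Python SDK; the model name and the Acme Corp prompt are illustrative examples, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a customer support agent for Acme Corp. "
    "You may only reference information from the provided knowledge base. "
    "Never discuss competitor pricing. "
    "Always respond in the customer's language."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute your own
    messages=[
        # Instructions the end user never sees, but that govern every reply.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Can I get a refund on my non-refundable ticket?"},
    ],
)
print(response.choices[0].message.content)
```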
Related terms:
Generative AI
Generative AI refers to AI systems that learn statistical patterns from training data to create new content—such as text, images, code, audio, or video—rather than classifying or analyzing existing data. This marks a shift from earlier discriminative models like spam filters and recommendation engines, with tools like ChatGPT, DALL-E, Midjourney, and Stable Diffusion driving its rapid mainstream adoption.
Context Window
A context window is the maximum amount of text a language model can process in a single call—input and output combined—measured in tokens. Larger windows (from about 4,000 tokens up to over a million) let you handle longer inputs but raise costs and can suffer from the “lost in the middle” attention issue.
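As a rough illustration, a pre-flight check with the tiktoken tokenizer can confirm a prompt leaves room for the reply; the window size and output budget below are assumed values for the sketch, not any particular model's limits:

```python
import tiktoken

CONTEXT_WINDOW = 128_000   # assumed total budget (input + output), in tokens
MAX_OUTPUT = 4_000         # tokens reserved for the model's reply

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_window(prompt: str) -> bool:
    """Return True if the prompt still leaves room for the reserved output tokens."""
    return len(enc.encode(prompt)) + MAX_OUTPUT <= CONTEXT_WINDOW

print(fits_in_window("Summarize the attached contract ..."))
```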
AI Evaluation
AI evaluation is the practice of systematically measuring an AI system’s performance against defined criteria—accuracy, latency, cost, safety, and user satisfaction—using representative test datasets, business-outcome metrics, and automated pipelines before and after deployment. Without it, organizations risk flying blind, mistaking demo success for reliable production performance.
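A minimal sketch of an offline evaluation loop over a small test set; ask_model and the example cases are hypothetical stand-ins for your own system and dataset, and exact-match accuracy is only the simplest possible metric:

```python
test_cases = [
    {"question": "What is Acme's return window?", "expected": "30 days"},
    {"question": "Do you price-match competitors?", "expected": "No"},
]

def ask_model(question: str) -> str:
    # Placeholder: replace with a call to your deployed system.
    return "30 days" if "return window" in question else "No"

def run_eval() -> float:
    """Score exact-match accuracy of the system's answers against expected values."""
    correct = sum(
        1 for case in test_cases
        if ask_model(case["question"]).strip().lower() == case["expected"].lower()
    )
    return correct / len(test_cases)

print(f"accuracy: {run_eval():.0%}")
```

In practice the same loop would also log latency and cost per call and run automatically before and after each deployment, so regressions show up in the metrics rather than in production.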