Prompt Engineering
Prompt engineering is the practice of crafting inputs to a language model so it produces the output you actually want. This ranges from simple instruction-writing to elaborate system prompts with examples, constraints, personas, and chain-of-thought scaffolding. The term sounds trivial—just ask it better—but the gap between a naive prompt and a well-engineered one can be the difference between a useless response and a production-grade output. Prompt engineering is also the most accessible lever for improving AI performance: it requires no training data, no GPUs, and no machine learning expertise. The tradeoff is fragility. Prompts that work on one model version may break on the next. They are hard to version-control, hard to test systematically, and easy to overfit to a handful of examples.
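To make that naive-versus-engineered gap concrete, here is a minimal sketch in Python. The support-ticket task, the category labels, and the prompt wording are hypothetical stand-ins, not a prescribed recipe; substitute your own domain and model call.

```python
# A minimal sketch of the gap between a naive prompt and an engineered one.
# The support-ticket task and category labels are hypothetical stand-ins;
# swap in your own domain and pass the string to your model provider.

TICKET = "App crashes when I upload a photo."

# Naive prompt: underspecified, so the output format and the label set
# are left entirely to the model.
naive_prompt = f"Categorize this support ticket: {TICKET!r}"

# Engineered prompt: a persona, an explicit constraint on the label set,
# few-shot examples, and a fixed output format the caller can parse.
engineered_prompt = f"""\
You are a support-triage assistant. Classify each ticket into exactly one
category: BUG, BILLING, or FEATURE_REQUEST. Reply with the category only.

Ticket: "I was charged twice this month."
Category: BILLING

Ticket: "Please add dark mode."
Category: FEATURE_REQUEST

Ticket: "{TICKET}"
Category:"""

if __name__ == "__main__":
    print(naive_prompt)
    print("---")
    print(engineered_prompt)
```

Because the engineered version pins down both the label set and the output format, its responses can be checked programmatically, and the same fixed examples double as a small regression suite to rerun whenever the underlying model version changes, which is one practical way to manage the fragility described above.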
Referenced in these posts:
Things I Think I Think About AI
Noah distills his 2,400+ hours of AI use into a candid, unordered list of 29 controversial takeaways—from championing ChatGPT’s advanced models and token...
Related terms:
Token
In large language models, a token is the basic unit of text—usually chunks of three to four characters—that the model reads and generates.
Transformer
The transformer is the neural network architecture introduced in Vaswani et al.’s “Attention Is All You Need” that replaces recurrence with parallel...
Fine-Tuning
Fine-tuning continues training a pretrained language model on a smaller, task-specific dataset so it internalizes particular behaviors, styles, or domain...