Hallucination
Hallucination is when a language model generates text that sounds confident and plausible but is factually wrong—invented citations, fabricated statistics, nonexistent API endpoints. It happens because LLMs are not databases. They are pattern-completion engines that predict likely next tokens, and sometimes the likeliest continuation is a fluent lie. Hallucination rates vary by model, task, and domain: open-ended creative writing has different tolerances than legal research.

Mitigation strategies include retrieval-augmented generation (grounding responses in source documents), chain-of-thought prompting (forcing the model to show its reasoning), and structured output validation. None of these eliminate hallucination entirely. Any system where an LLM's output reaches a customer, a contract, or a database without human review or automated verification is a system waiting to embarrass you.
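One form of automated verification is checking that every citation in a model's answer actually points at a document that was retrieved. This is a minimal sketch, not any particular library's API; the `[doc-N]` citation format and the `validate_citations` helper are assumptions for illustration.

```python
import re

def validate_citations(answer: str, source_ids: set[str]) -> list[str]:
    """Return citation markers in `answer` that match no retrieved source.

    Assumes citations look like [doc-123]; real systems define their own
    citation schemes and retrieval pipelines.
    """
    cited = re.findall(r"\[(doc-\d+)\]", answer)
    return [c for c in cited if c not in source_ids]

# A fabricated citation is flagged for review instead of reaching the reader.
unverified = validate_citations(
    "Revenue grew 40% [doc-7] per the annual filing [doc-99].",
    source_ids={"doc-7", "doc-12"},
)
```

A check like this catches fabricated references but not wrong claims attributed to real sources, which is why it complements rather than replaces human review.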
Related terms:
Conway's Law
Conway’s Law states that organizations designing systems are constrained to produce designs mirroring their own communication structures.
Context Window
A context window is the maximum amount of text a language model can process in a single call—input and output combined—measured in tokens.
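Because input and output share the same budget, applications typically trim older conversation history to fit. The sketch below assumes a crude four-characters-per-token estimate (real tokenizers vary by model) and hypothetical budget numbers; it is an illustration, not a production tokenizer.

```python
def fit_to_window(messages: list[str], max_tokens: int = 8192,
                  reserve_for_output: int = 1024) -> list[str]:
    """Keep the most recent messages that fit the input budget.

    Uses a rough ~4 characters-per-token estimate (an assumption) and
    drops the oldest messages first once the budget is exhausted.
    """
    budget = max_tokens - reserve_for_output
    kept: list[str] = []
    for msg in reversed(messages):        # walk newest to oldest
        cost = max(1, len(msg) // 4)      # rough token estimate
        if cost > budget:
            break                         # oldest messages fall off
        budget -= cost
        kept.append(msg)
    return list(reversed(kept))           # restore chronological order
```

Reserving output tokens up front matters: a prompt that fills the entire window leaves the model no room to respond.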
WWGPTD
WWGPTD began as internal Slack shorthand to remind teams that using AI isn’t cheating but the essential first step. It reframes strategy by asking how AI would approach a problem before humans invest effort.