Glossary

Hallucination

Hallucination is when a language model generates text that sounds confident and plausible but is factually wrong—invented citations, fabricated statistics, nonexistent API endpoints. It happens because LLMs are not databases. They are pattern-completion engines that predict likely next tokens, and sometimes the likeliest continuation is a fluent lie. Hallucination rates vary by model, task, and domain: open-ended creative writing has different tolerances than legal research. Mitigation strategies include retrieval-augmented generation (grounding responses in source documents), chain-of-thought prompting (forcing the model to show its reasoning), and structured output validation. None of these eliminate hallucination entirely. Any system where an LLM's output reaches a customer, a contract, or a database without human review or automated verification is a system waiting to embarrass you.
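
As a sketch of the first of those strategies, the snippet below grounds an answer in retrieved passages before the model answers. Here retrieve() and complete() are hypothetical placeholders for a document-store lookup and an LLM call, not any particular library's API.

    # Minimal retrieval-augmented generation sketch. retrieve() and
    # complete() are hypothetical stand-ins for a real vector-store lookup
    # and a real LLM API call.
    from typing import List

    def retrieve(query: str, k: int = 3) -> List[str]:
        # Hypothetical: a real system would return the k stored passages
        # most similar to the query.
        return ["(retrieved passage 1)", "(retrieved passage 2)"]

    def complete(prompt: str) -> str:
        # Hypothetical: a real system would send the prompt to a model
        # and return its reply.
        return "(model answer, restricted to the sources above)"

    def grounded_answer(question: str) -> str:
        context = "\n\n".join(retrieve(question))
        prompt = (
            "Answer using ONLY the sources below. "
            "If the sources do not contain the answer, say so.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )
        return complete(prompt)

The instruction to refuse when the sources are silent is doing real work here: it trades fluency for verifiability.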

Related terms:

Agentic AI

Agentic AI refers to systems that autonomously pursue goals—planning actions, employing tools, and adapting based on feedback—without waiting for human instructions at every step. Unlike passive AI that only responds when prompted, agentic AI can monitor systems, diagnose issues, and propose fixes on its own.
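
A rough sketch of the loop such a system runs, with propose_action() and TOOLS as hypothetical placeholders rather than any real framework's API: the model proposes a step, the system executes it, and the result feeds back into the next decision.

    # Skeleton of an agentic loop. propose_action() and TOOLS are
    # hypothetical placeholders, not a real framework.
    TOOLS = {
        "check_disk": lambda args: "disk 91% full on /var",
        "restart_service": lambda args: f"restarted {args}",
    }

    def propose_action(goal: str, history: list) -> dict:
        # Hypothetical: ask a model for the next step given the goal and
        # everything observed so far, e.g.
        # {"tool": "check_disk", "args": "", "done": False}.
        return {"tool": None, "args": "", "done": True, "answer": "(summary)"}

    def run_agent(goal: str, max_steps: int = 10) -> str:
        history = []
        for _ in range(max_steps):
            action = propose_action(goal, history)
            if action["done"]:
                return action["answer"]
            result = TOOLS[action["tool"]](action["args"])
            history.append((action, result))  # feedback for the next step
        return "stopped: step budget exhausted"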

AI Agent

An AI agent is a system that autonomously breaks a goal into steps—calling tools, reading results, and adjusting course—without waiting for a human prompt. While powerful for tasks with clear success criteria, agents can be dangerous when goals are vague or environments are unfamiliar, and they typically need tight guardrails in production.
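
One hedged sketch of what "tight guardrails" can mean in practice: an explicit allowlist of read-only tools plus human sign-off for anything destructive. The tool names and the guarded_call() helper are illustrative, not a standard interface.

    # Wrap every tool call the agent attempts. The tool names and the
    # execute callable are illustrative assumptions.
    ALLOWED_TOOLS = {"read_logs", "query_metrics"}        # safe, read-only
    NEEDS_APPROVAL = {"restart_service", "delete_file"}   # destructive

    def guarded_call(tool: str, args: str, execute) -> str:
        if tool in ALLOWED_TOOLS:
            return execute(tool, args)
        if tool in NEEDS_APPROVAL:
            reply = input(f"Agent wants to run {tool}({args!r}). Allow? [y/N] ")
            if reply.strip().lower() == "y":
                return execute(tool, args)
            return "denied by operator"
        return f"unknown tool {tool!r}: refused"

The same pattern extends to step budgets, spend limits, and sandboxed credentials.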

Structured Output

Structured output is when a language model returns data in predictable, machine-readable formats—such as JSON, XML, or typed objects—rather than free-form prose, so that software can reliably parse fields like names, dates, and dollar amounts. When constrained generation enforces a JSON schema, structured output turns an LLM from a conversational interface into a dependable system component.
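
A minimal sketch of consuming structured output, assuming Pydantic v2 is available; the Invoice fields and the raw string standing in for a model reply are illustrative.

    # Parse a model's JSON reply into a typed object and reject anything
    # that does not match the schema. Assumes Pydantic v2; the Invoice
    # fields are illustrative.
    from pydantic import BaseModel, ValidationError

    class Invoice(BaseModel):
        customer: str
        due_date: str      # ISO date string, e.g. "2025-07-01"
        amount_usd: float

    raw = '{"customer": "Acme Corp", "due_date": "2025-07-01", "amount_usd": 1299.5}'

    try:
        invoice = Invoice.model_validate_json(raw)  # typed, machine-readable
        print(invoice.amount_usd)
    except ValidationError as err:
        # Malformed or free-form output fails loudly instead of silently
        # corrupting downstream systems.
        print("model output rejected:", err)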