Glossary

Structured Output

Structured output means a language model returns data in a predictable, machine-readable format—JSON, XML, typed objects—rather than free-form prose. This is what makes LLMs usable as components in software systems rather than just conversational interfaces. If you need the model to extract a name, date, and dollar amount from an invoice, you need those values in fields your code can parse, not embedded in a sentence. Most model providers now support constrained generation—forcing the model's output to conform to a JSON schema—which eliminates the parsing failures that plagued early integrations. OpenAI's structured output mode, Anthropic's tool use, and open-source libraries like Instructor all address this problem. Structured output is the bridge between AI as a chat feature and AI as a system component, and getting it right is a prerequisite for any serious automation.
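The invoice example above can be sketched in a few lines. This is a minimal, hand-rolled validation pass, not any provider's actual API: the `raw` string stands in for a hypothetical model response, and the `SCHEMA` mapping is an illustrative stand-in for a real JSON schema.

```python
import json

# Hypothetical raw model output for an invoice-extraction prompt.
# A real constrained-generation API would guarantee this conforms
# to your schema; here we validate it ourselves.
raw = '{"name": "Acme Corp", "date": "2024-03-15", "amount": 1249.50}'

# Minimal "schema": field name -> expected Python type(s).
SCHEMA = {"name": str, "date": str, "amount": (int, float)}

def parse_invoice(text: str) -> dict:
    """Parse the model's JSON and check the fields code depends on."""
    data = json.loads(text)  # raises json.JSONDecodeError on malformed output
    for field, expected in SCHEMA.items():
        if field not in data:
            raise KeyError(f"missing field: {field}")
        if not isinstance(data[field], expected):
            raise TypeError(f"bad type for {field}: {type(data[field]).__name__}")
    return data

invoice = parse_invoice(raw)
```

The point is that downstream code reads `invoice["amount"]` directly instead of regex-scraping a sentence; with provider-enforced schemas the validation step becomes a safety net rather than the main defense.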

Related terms:

LLM (Large Language Model)

A large language model is a neural network with billions of parameters trained on massive text corpora to predict the next word in a sequence, powering tasks from coding and summarization to translation and conversation. Though general-purpose by default, LLMs require prompting, fine-tuning, or data integration to excel at specific tasks.

Temperature

Temperature is a parameter controlling a language model’s randomness: at 0 it always picks the most probable next token for deterministic, reliable output, at 1 it samples more broadly for varied, creative results, and above 1 it becomes increasingly random. Choosing the right temperature (e.g., 0 for consistent data extraction or 0.7–0.9 for brainstorming) balances reliability and diversity.
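The effect of temperature can be shown with a toy sampler: divide the logits by the temperature before applying softmax. This is a sketch of the standard technique, not any particular provider's implementation; the example logits are invented.

```python
import math
import random

def sample_token(logits: dict, temperature: float, rng=None) -> str:
    """Sample a next token from logits scaled by temperature.

    temperature == 0: greedy decoding (always the most probable token);
    temperature == 1: sample from the model's raw distribution;
    temperature > 1: distribution flattens, output gets more random.
    """
    if temperature == 0:
        return max(logits, key=logits.get)  # deterministic argmax
    rng = rng or random.Random()
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())                # subtract max for numerical stability
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    total = sum(exps.values())
    r = rng.random()
    cum = 0.0
    for token, e in exps.items():
        cum += e / total
        if r < cum:
            return token
    return token  # floating-point edge case: return the last token

# Invented next-token logits after the prompt "The capital of France is"
logits = {"Paris": 5.0, "Lyon": 1.0, "banana": -2.0}
```

At temperature 0 this always returns "Paris"; as the temperature rises, "Lyon" and even "banana" become increasingly likely, which is exactly the reliability-versus-diversity trade-off described above.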

Conway's Law

Conway’s Law states that organizations designing systems are constrained to produce designs mirroring their own communication structures. For example, separate sales, marketing, and support teams often yield a website organized into Shop, Learn, and Support sections—reflecting internal divisions rather than user needs.