Glossary

AI Governance

AI governance is the set of policies, processes, and technical controls an organization uses to manage the risks of deploying AI systems. This includes deciding which use cases are appropriate for AI, how models are evaluated before deployment, who is accountable when they fail, and how the organization handles data privacy, bias, and regulatory compliance. The EU AI Act, which entered into force in 2024, made governance a legal requirement for companies operating in Europe: it classifies AI systems by risk level and imposes obligations that scale accordingly. But governance is not just a compliance exercise. Organizations without clear AI governance end up with shadow AI: employees using ChatGPT to draft contracts, analyze customer data, or make recommendations with no oversight, no audit trail, and no idea what the model was trained on. Governance is how you use AI aggressively without using it recklessly.

Structured Output

Structured output occurs when a language model returns data in predictable, machine-readable formats (such as JSON, XML, or typed objects) rather than free-form prose, enabling software systems to reliably parse fields like names, dates, and dollar amounts. Constrained generation, which restricts the model's token choices so that its output must conform to a declared schema, turns the model from a conversational interface into a dependable system component.
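
A minimal sketch of the consuming side, in Python with the Pydantic library; the Invoice schema, its fields, and the hard-coded reply are illustrative assumptions, not part of any standard.

from pydantic import BaseModel, ValidationError

class Invoice(BaseModel):
    vendor: str
    invoice_date: str  # ISO 8601 date string, an assumed convention
    total_usd: float

def parse_invoice(raw: str) -> Invoice | None:
    # Validate the model's JSON reply against the schema; reject anything
    # that is missing fields, mistyped, or not JSON at all.
    try:
        return Invoice.model_validate_json(raw)
    except ValidationError:
        return None  # caller can retry the model or fall back

# Hard-coded stand-in for a model reply that was constrained to this schema.
raw_reply = '{"vendor": "Acme Corp", "invoice_date": "2024-05-01", "total_usd": 1280.5}'
print(parse_invoice(raw_reply))

When validation fails, a common recovery pattern is to re-prompt the model with the validation error attached rather than accepting the malformed reply.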

Hallucination

Hallucination occurs when a language model generates text that sounds confident and plausible but is factually incorrect, such as invented citations or fabricated statistics. It stems from the fact that LLMs are pattern-completion engines rather than databases: they predict the most plausible next token, not a verified fact, so a fluent-sounding falsehood is a natural failure mode.

Chain-of-Thought

Chain-of-thought prompting, introduced by Google Research in 2022, transforms AI from an answer machine into a reasoning partner by having the model spell out its problem-solving process step by step. By decomposing complex queries into sequential reasoning steps and making implicit thinking explicit, it markedly improves performance on arithmetic, commonsense, and symbolic reasoning tasks.
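
A minimal sketch of the technique in Python; the worked exemplar and the build_cot_prompt helper are invented for illustration, and the resulting string can be sent to any LLM client.

# One worked example showing the reasoning format the model should imitate.
COT_EXEMPLAR = (
    "Q: A cafe sold 23 muffins in the morning and twice as many in the "
    "afternoon. How many muffins did it sell in total?\n"
    "A: Morning sales were 23. Afternoon sales were twice that, so "
    "2 * 23 = 46. Total: 23 + 46 = 69. The answer is 69.\n"
)

def build_cot_prompt(question: str) -> str:
    # Prepend the worked example so the model answers new questions in the
    # same step-by-step style instead of jumping straight to a number.
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

print(build_cot_prompt("A train travels 60 km/h for 2.5 hours. How far does it go?"))

In the zero-shot variant, appending a cue such as "Let's think step by step" to the question serves the same purpose without any exemplar.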