Glossary

AI Agent

An AI agent is a system that takes a goal, breaks it into steps, and executes those steps autonomously—calling tools, reading results, adjusting course—without a human approving each action. Where a chatbot waits for your next prompt, an agent decides what to do next on its own. The concept borrows from reinforcement learning and robotics, but the current wave runs on large language models that can reason about which tool to use when. Agents are powerful when the task has clear success criteria and bounded risk. They are dangerous when the goal is vague, the environment is unfamiliar, or the cost of a wrong action is high. Most production agent systems today still need tight guardrails and human checkpoints—full autonomy remains more demo than reality.
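The loop described above can be sketched in a few lines. This is a minimal illustration, not a production pattern: the tools (`search`, `calculate`) and the fixed plan are hypothetical stand-ins, and a real agent would ask an LLM to choose the next step instead of following a hard-coded plan.

```python
# Minimal sketch of an agent loop: pursue a goal step by step,
# calling a tool at each step and recording the result.
# The tools and plan below are hypothetical stand-ins; a real agent
# would have an LLM decide which tool to call next.

def search(query):
    """Hypothetical search tool."""
    return f"results for {query!r}"

def calculate(expr):
    """Hypothetical calculator tool (eval sandboxed to bare arithmetic)."""
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"search": search, "calculate": calculate}

def run_agent(goal, plan, max_steps=5):
    """Execute a plan of (tool, argument) steps, capped at max_steps --
    the cap is one of the guardrails the definition mentions."""
    history = []
    for tool_name, arg in plan[:max_steps]:
        result = TOOLS[tool_name](arg)
        history.append((tool_name, arg, result))
    return history

steps = run_agent("evaluate 2+2", [("search", "2+2"), ("calculate", "2+2")])
```

The `max_steps` cap is the simplest form of guardrail: even a misbehaving plan cannot run forever.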

Related terms:
Agentic AI, System Prompt, Structured Output

System Prompt

A system prompt is a set of instructions given to a language model before the conversation begins, hidden from the end user, that defines the model's persona, constraints, output format, and behavioral rules. In chat APIs such as OpenAI's and Anthropic's, it occupies the "system" role. Because it shapes every response and encodes business logic in one place, it is often the most direct lever for controlling model behavior.
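A rough sketch of how the system role fits into a request, using the OpenAI-style message-list shape (the Anthropic API instead takes the system prompt as a separate parameter). The prompt text and helper function here are illustrative, not from any real product.

```python
# Sketch of a system prompt occupying the "system" role in an
# OpenAI-style message list. The prompt text is a made-up example.

SYSTEM_PROMPT = (
    "You are a billing assistant. Answer only billing questions. "
    "Always reply in JSON with keys 'answer' and 'confidence'."
)

def build_request(user_message, history=()):
    """Assemble the message list: the system prompt comes first, is
    never shown to the end user, and shapes every response."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        *history,
        {"role": "user", "content": user_message},
    ]

messages = build_request("Why was I charged twice?")
```

Note that the system message is re-sent with every request; the model has no memory of it between calls.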

Agentic AI

Agentic AI refers to systems that autonomously pursue goals—planning actions, employing tools, and adapting based on feedback—without waiting for human instructions at every step. Unlike passive AI that only responds when prompted, agentic AI can monitor systems, diagnose issues, and propose fixes on its own.
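The monitor-diagnose-propose behavior described above can be sketched as a single autonomous cycle. The thresholds and playbook below are invented for illustration; a real system would pull live metrics and might ask an LLM to do the diagnosis.

```python
# Sketch of one agentic monitor-diagnose-propose cycle.
# Thresholds and fixes are hypothetical stand-ins.

LIMITS = {"error_rate": 0.05, "latency_ms": 500}

PLAYBOOK = {
    "error_rate": "roll back last deploy",
    "latency_ms": "scale out web tier",
}

def diagnose(metrics):
    """Flag any monitored metric that exceeds its limit."""
    return [name for name, value in metrics.items()
            if name in LIMITS and value > LIMITS[name]]

def agentic_cycle(metrics):
    """Monitor, diagnose, and propose fixes without being asked --
    the 'on its own' part of the definition. Proposals still go to a
    human; applying them automatically would be full autonomy."""
    return {issue: PLAYBOOK.get(issue, "escalate to on-call")
            for issue in diagnose(metrics)}

proposals = agentic_cycle({"error_rate": 0.12, "latency_ms": 300})
```

Stopping at *proposing* fixes, rather than applying them, is the human checkpoint the AI Agent entry calls for.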

Structured Output

Structured output is data a language model returns in a predictable, machine-readable format—such as JSON, XML, or typed objects—rather than free-form prose, so that software can reliably parse fields like names, dates, and dollar amounts. Paired with constrained generation that enforces a schema, structured output turns the model from a conversational interface into a dependable system component.
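A sketch of the consuming side: the caller parses the model's JSON reply and type-checks the fields before trusting them. The reply string and the `REQUIRED` schema here are made-up examples; constrained generation on the model side would guarantee the shape, but defensive validation is still cheap insurance.

```python
import json

# Sketch of parsing structured output. The model reply is a
# hard-coded example; the field schema is hypothetical.

model_reply = (
    '{"name": "Acme Corp", "invoice_date": "2024-03-01", '
    '"amount_usd": 1250.0}'
)

REQUIRED = {"name": str, "invoice_date": str, "amount_usd": float}

def parse_structured(reply):
    """Parse the model's JSON and verify every required field has the
    expected type, so downstream code can rely on the data."""
    data = json.loads(reply)
    for field, expected_type in REQUIRED.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field}")
    return data

record = parse_structured(model_reply)
```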