Glossary

Foundation Model

A foundation model is a large AI model trained on broad data at massive scale, designed to be adapted to a wide range of downstream tasks rather than built for any single one. GPT-4, Claude, Gemini, Llama, and Stable Diffusion are all foundation models. The term was coined by Stanford's Center for Research on Foundation Models in 2021 to capture a shift in how AI gets built: instead of training a new model for each task, you train one general model and specialize it. This is efficient but concentrates power. A handful of foundation model providers—OpenAI, Anthropic, Google, Meta—set the capabilities and limitations that millions of applications inherit. For enterprises, the practical question is not which foundation model is best in abstract benchmarks but which one is best for your specific tasks, data constraints, and risk tolerance.
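
To make the "train once, specialize per task" pattern concrete, here is a minimal sketch using the Hugging Face transformers library; the bert-base-uncased checkpoint and the two-label setup are placeholders for whatever pretrained model and downstream task you actually have:

```python
# A sketch of adapting one pretrained foundation model to a downstream task.
# The checkpoint, example text, and label count are illustrative placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "bert-base-uncased"  # placeholder: any pretrained checkpoint

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Reuse the pretrained weights; only the small classification head is new.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer("The onboarding flow is confusing.", return_tensors="pt")
outputs = model(**inputs)  # fine-tune on labeled data before trusting these logits
print(outputs.logits.shape)  # (1, 2): one score per downstream label
```

Only the small classification head starts from scratch; the bulk of the weights, and therefore the model's capabilities and blind spots, are inherited from whoever did the pretraining.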

Hallucination

Hallucination occurs when a language model generates text that sounds confident and plausible but is factually incorrect, such as invented citations or fabricated statistics. It stems from the fact that LLMs are pattern-completion engines rather than databases: they produce the most statistically plausible continuation of a prompt, whether or not that continuation is true.

Zero-Shot Prompting

Zero-shot prompting is the most basic form of AI interaction: a task is posed without any examples or guidance, relying entirely on the model's pre-trained knowledge. This baseline approach is a quick test of raw capability, revealing both the breadth and the limits of what the model already knows.
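
As a concrete illustration, here is a minimal zero-shot call sketched with the OpenAI Python client; the model name is a placeholder, and the same shape applies to any chat-completion API:

```python
# Zero-shot: the task is stated directly, with no worked examples in the prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        # No demonstrations: the model must rely entirely on
        # whatever it learned during pre-training.
        {"role": "user", "content": "Classify the sentiment of this review "
                                    "as positive or negative: 'The battery "
                                    "died after two days.'"}
    ],
)
print(response.choices[0].message.content)
```

Note the absence of worked examples in `messages`; adding even one labeled demonstration would turn this into one-shot prompting instead.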

AI Agent

An AI agent is a system that autonomously breaks a goal into steps, calling tools, reading results, and adjusting course, without step-by-step human direction. While powerful for tasks with clear success criteria, agents can be dangerous when goals are vague or environments unfamiliar, and they typically need tight guardrails in production.
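
The loop below is a minimal sketch of that plan-act-observe cycle; call_llm and search_docs are hypothetical stand-ins (the model call is stubbed so the example runs end to end):

```python
# A sketch of the plan-act-observe loop at the core of an agent.
# call_llm() is a hypothetical stand-in for a real model API.
import json

def call_llm(prompt: str) -> str:
    """Hypothetical model call. A real agent would send `prompt` to an LLM and
    expect JSON back: {"tool": ..., "args": ...} or {"answer": ...}."""
    return json.dumps({"answer": "stubbed answer"})  # stub: finish immediately

def search_docs(query: str) -> str:
    """Toy tool; a real agent would hit an actual search index."""
    return f"No results found for {query!r}."

TOOLS = {"search_docs": search_docs}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):       # hard step cap: one simple guardrail
        decision = json.loads(call_llm("\n".join(history)))
        if "answer" in decision:     # the model decided it is done
            return decision["answer"]
        observation = TOOLS[decision["tool"]](**decision["args"])
        history.append(f"Observation: {observation}")  # feed the result back
    return "Step budget exhausted."  # fail closed rather than loop forever

print(run_agent("Summarize our refund policy"))
```

The hard step cap is the simplest of the guardrails mentioned above: it bounds what a confused agent can do by forcing it to stop rather than loop indefinitely.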