Foundation Model
A foundation model is a large AI model trained on broad data at massive scale, designed to be adapted to a wide range of downstream tasks rather than built for any single one. GPT-4, Claude, Gemini, Llama, and Stable Diffusion are all foundation models. The term was coined by Stanford's Center for Research on Foundation Models in 2021 to capture a shift in how AI gets built: instead of training a new model for each task, you train one general model and specialize it. This is efficient but concentrates power. A handful of foundation model providers—OpenAI, Anthropic, Google, Meta—set the capabilities and limitations that millions of applications inherit. For enterprises, the practical question is not which foundation model is best in abstract benchmarks but which one is best for your specific tasks, data constraints, and risk tolerance.
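The "train one general model and specialize it" pattern can be sketched in miniature. The example below is a toy illustration, not a real API: `call_model` is a hypothetical stand-in for an actual foundation-model client (OpenAI, Anthropic, etc.), and specialization here is done with task-specific prompt templates rather than fine-tuning.

```python
# Toy sketch: one general "foundation" interface, specialized per task
# via prompt templates instead of training a separate model per task.

def call_model(prompt: str) -> str:
    # Hypothetical placeholder for a real foundation-model API call.
    return f"[model response to: {prompt!r}]"

def make_task(template: str):
    """Derive a task-specific function from the one general model."""
    def task(user_input: str) -> str:
        return call_model(template.format(input=user_input))
    return task

# Two "downstream tasks" built on the same underlying model.
summarize = make_task("Summarize in one sentence:\n{input}")
classify = make_task("Label the sentiment (positive/negative):\n{input}")

print(summarize("Quarterly revenue rose 12% on cloud growth."))
print(classify("The onboarding flow was confusing."))
```

The same shape holds whether specialization happens through prompting, retrieval, or fine-tuning: the base model is shared, and only the thin adaptation layer differs per task.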
Related terms:
Structured Output
Structured output occurs when a language model returns data in predictable, machine-readable formats—such as JSON, XML, or typed objects—rather than...
Multimodal AI
Multimodal AI refers to models that process and generate multiple data types—text, images, audio, and video—within a single system.
AI Agent
An AI agent is a system that autonomously breaks a goal into steps—calling tools, reading results, and adjusting course—without waiting for a human prompt.