Embeddings
Embeddings are numerical representations of text—vectors of hundreds or thousands of floating-point numbers—that capture semantic meaning in a form machines can compare mathematically. Two sentences about the same concept will produce vectors that are close together in this high-dimensional space, even if they share no words. This is what makes semantic search work: instead of matching keywords, you match meaning. Embeddings power recommendation engines, clustering, anomaly detection, and the retrieval half of RAG architectures. The quality of your embeddings determines the quality of everything downstream. A bad embedding model will retrieve irrelevant documents, and no amount of prompt engineering on the generation side will fix that. Choosing the right embedding model for your domain—and evaluating it rigorously—is unglamorous work that separates systems that actually perform from ones that demo well.
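The mathematical comparison at the heart of this is usually cosine similarity: the angle between two vectors, ignoring their length. As a minimal sketch, the toy 4-dimensional vectors and their labels below are invented for illustration; real embedding models produce vectors of hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: near 1.0 for similar meaning, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings (values are illustrative only).
cat = [0.9, 0.1, 0.8, 0.2]        # "a small cat sat down"
kitten = [0.85, 0.15, 0.75, 0.3]  # "the kitten rested" — no shared words
invoice = [0.1, 0.9, 0.05, 0.7]   # "pay the invoice by Friday"

print(cosine_similarity(cat, kitten))   # high: close in meaning
print(cosine_similarity(cat, invoice))  # low: unrelated
```

Semantic search is then just this comparison at scale: embed the query, score it against every stored document vector, and return the highest-scoring matches regardless of keyword overlap.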
Referenced in these posts:
Noah on Bloomberg: Claude Code and the AI Coding Boom
On Bloomberg’s Odd Lots, Noah Brier highlights Claude Code as a “computer within your computer,” using file system access and Unix commands to bypass...
Software for One
I built Aesthete—a local-only TypeScript/Next.js/Tailwind site powered by Claude Code and Anthropic frontend design skills—to curate nearly 200 aesthetically...
Related terms:
Context Window
A context window is the maximum amount of text a language model can process in a single call—input and output combined—measured in tokens.