Glossary

Fine-Tuning

Fine-tuning takes a pretrained language model and continues training it on a smaller, task-specific dataset so it learns particular behaviors, styles, or knowledge. Where prompting tells the model what to do at runtime, fine-tuning changes what the model is. The technique is useful when you need consistent formatting, domain-specific terminology, or behavior that is hard to elicit through prompts alone. Fine-tuning a model on 1,000 examples of your company's writing style will produce more reliable voice-matching than any system prompt. The costs: you need curated training data, the process can take hours to days, and a fine-tuned model can lose general capabilities it had before (catastrophic forgetting). For most enterprise use cases, the question is not whether to fine-tune; it is whether the consistency gains justify the data curation and maintenance overhead versus a strong prompt with retrieval.
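
As a rough sketch of what this looks like in practice, here is how a style-tuning job might be kicked off with OpenAI's fine-tuning API. The example content, file name, and model snapshot are illustrative placeholders, not recommendations:

```python
import json
from openai import OpenAI

# Each training example is a short chat transcript ending in the assistant
# reply you want the model to imitate, stored as JSONL (one example per line).
examples = [
    {"messages": [
        {"role": "system", "content": "You write in Acme Co's house style."},
        {"role": "user", "content": "Draft a two-sentence product update."},
        {"role": "assistant", "content": "We shipped faster search. Try it today."},
    ]},
    # ...hundreds more curated examples like the 1,000 mentioned above...
]
with open("style_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

client = OpenAI()

# Upload the dataset, then start a fine-tuning job against a base model.
training_file = client.files.create(
    file=open("style_examples.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-4o-mini-2024-07-18"
)
print(job.id)  # poll this job until it finishes, then call the resulting model
```

The hours-to-days timeline above is the gap between creating this job and the tuned model becoming available; the curation work is everything that goes into the JSONL file.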

Referenced in these posts:

Things I Think I Think About AI

Noah distills his 2,400+ hours of AI use into a candid, unordered list of 29 controversial takeaways, from championing ChatGPT’s advanced models and token maximalism to predicting enterprise adoption bottlenecks, and invites fellow practitioners to weigh in. CMOs can reach out to Alephic for expert guidance on integrating AI into their marketing organizations.

Related terms:

Token

In large language models, a token is the basic unit of text—usually chunks of three to four characters—that the model reads and generates. Since API costs, context windows, and rate limits are all measured in tokens, understanding tokenization is essential for controlling prompt length, cost, and model behavior.
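
A quick way to build intuition is to tokenize a string directly. This minimal sketch uses OpenAI's tiktoken library; other providers use different tokenizers, so counts will vary:

```python
import tiktoken

# cl100k_base is the encoding used by GPT-4-era OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Tokenization drives cost and context limits."
tokens = enc.encode(text)

print(len(text), "characters ->", len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])  # see exactly how the text was split
```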

Transformer

The transformer is the neural network architecture introduced in Vaswani et al.’s “Attention Is All You Need” that replaces recurrence with parallel self-attention, enabling efficient training on internet-scale data. Its simple, scalable focus on attention powers state-of-the-art models across text, vision, protein folding, audio synthesis, and more.
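
The heart of the architecture is scaled dot-product self-attention. The sketch below strips it to a single head with no learned projections or masking, which is enough to show why every position can attend to every other position in parallel:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence.

    X: (seq_len, d) array of token embeddings. In a real transformer,
    Q, K, and V come from learned linear projections of X; here we use
    X directly to keep the sketch minimal.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)          # pairwise similarity of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ X                      # each position mixes in all others

# No recurrence: all positions are processed at once, which is what makes
# training on internet-scale data parallelizable.
out = self_attention(np.random.randn(5, 8))
print(out.shape)  # (5, 8)
```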

Prompt Engineering

Prompt engineering involves designing and refining inputs—ranging from simple instructions to detailed system prompts with examples, constraints, personas, and chain-of-thought scaffolding—to elicit desired outputs from a language model. It’s the most accessible way to boost AI performance, requiring no training data or ML expertise, but prompts can be fragile, hard to version-control, and easy to overfit.
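
As an illustration of those layers, the sketch below assembles a system prompt from a persona, constraints, a few-shot example, and chain-of-thought scaffolding; all of the wording is invented for the example:

```python
# Persona: who the model should act as.
persona = "You are a senior support engineer for a SaaS product."

# Constraints: hard rules on the output.
constraints = (
    "Answer in at most three sentences. "
    "If you are unsure, say so instead of guessing."
)

# Few-shot example: one worked Q/A pair the model should imitate.
few_shot = (
    "Example:\n"
    "Q: The dashboard is blank after login.\n"
    "A: Clear the browser cache; if that fails, check that the account's "
    "data region matches the app URL."
)

# Chain-of-thought scaffolding: ask for step-by-step reasoning first.
cot = "Think through the likely cause step by step before replying."

system_prompt = "\n\n".join([persona, constraints, few_shot, cot])
print(system_prompt)
```

Because the prompt is just a string built in code, it can at least be version-controlled and tested, which mitigates the fragility noted above.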