Token
In the context of large language models, a token is the basic unit of text the model reads and generates. Tokens are not words; they are chunks of text determined by the model's tokenizer, typically three to four characters each. The word "embedding" is one token. The word "unbelievable" is three. A period is one. This matters because everything in LLM-land is priced, measured, and constrained by tokens: API costs are per-token, context windows are measured in tokens, and rate limits cap tokens per minute. When someone says a model has a 128,000-token context window, that is the total budget for input and output combined. Understanding tokenization is not academic: it directly affects what you can fit in a prompt, how much a query costs, and why the model sometimes splits words in unexpected places.
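As a rough sketch of how this plays out in practice, the snippet below counts tokens with OpenAI's tiktoken library. The cl100k_base encoding and the 128,000-token window are assumptions for illustration; exact splits and limits vary by model and tokenizer.

```python
# A minimal sketch of token counting, assuming OpenAI's tiktoken library
# and the cl100k_base encoding; splits vary from tokenizer to tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["embedding", "unbelievable", "."]:
    ids = enc.encode(text)
    print(f"{text!r} -> {len(ids)} token(s): {ids}")

# The context window budgets input and output together: a longer prompt
# leaves fewer tokens for the reply. 128,000 is an assumed model limit.
CONTEXT_WINDOW = 128_000
prompt = "Summarize the following report in three bullet points."
prompt_tokens = len(enc.encode(prompt))
print(f"Prompt uses {prompt_tokens} tokens; "
      f"{CONTEXT_WINDOW - prompt_tokens} remain for the response.")
```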
Referenced in these posts:
Transformers are Eating the World
Reflecting on three years of AI adoption, this talk emphasizes that transformative technologies like transformers are still in their infancy, requiring hands-on exploration and healthy skepticism toward confident predictions. It argues that AI acts more as a mirror—revealing our own organizational patterns and biases—than a crystal ball for the future.
Things I Think I Think About AI
Noah distills his 2,400+ hours of AI use into a candid, unordered list of 29 controversial takeaways, from championing ChatGPT’s advanced models and token maximalism to predicting enterprise adoption bottlenecks, and invites fellow practitioners to discuss them. CMOs can reach out to Alephic for expert guidance on integrating AI into their marketing organizations.
Don’t Let SaaS Train on Your Private Tokens
Don't let SaaS solutions train on your unique competitive advantage; protect your company's IP by building your own custom AI.
Related terms:
WWGPTD
WWGPTD began as internal Slack shorthand to remind teams that using AI isn’t cheating but the essential first step. The accompanying bracelets serve to normalize AI as a fundamental tool for creating better work.
Private Tokens
Proprietary organizational data and institutional knowledge that generic AI can’t access—encompassing conversational transcripts, internal documentation, digital communications, and unwritten tribal wisdom. When integrated into custom AI systems, these private tokens deliver unique customer insights, brand voice patterns, and strategic intelligence to power competitive marketing automation.
Transformer
The transformer is the neural network architecture introduced in Vaswani et al.’s “Attention Is All You Need” that replaces recurrence with parallel self-attention, enabling efficient training on internet-scale data. Its simple, scalable focus on attention powers state-of-the-art models across text, vision, protein folding, audio synthesis, and more.
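To make the attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in plain numpy. The toy dimensions and random weight matrices are illustrative assumptions, not the full multi-head architecture from the paper.

```python
# A toy sketch of scaled dot-product self-attention, the core operation
# of the transformer. Dimensions and weights here are illustrative only.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """attention(X) = softmax(Q K^T / sqrt(d_k)) V, for all tokens at once."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)     # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # each token mixes in all others in parallel

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # 4 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))              # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # -> (4, 8)
```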