Glossary

WWGPTD

WWGPTD? (What Would GPT Do?) started as internal Slack shorthand for a simple truth: AI isn't your guilty secret—it's step zero. These bracelets remind us that using AI isn't cheating. It's table stakes.

We believe in human creativity and AI. The value these tools bring is too significant to hide behind disclaimers. No more "I used AI for this." Just better work—sometimes faster, sometimes not, but always better.

While it's just a bracelet, we hope it helps normalize what should already be normal: using the best tools to do your best work.

Ad edificatores.¹

¹ To the builders.

Referenced in these posts:

The Anatomy of a CEO AI Mandate

As AI becomes the default starting point for every task, CEOs from IBM to Shopify are issuing mandates that range from theatrical pronouncements to enforceable operating frameworks. This post decodes these moves in a 2×2 matrix—Peacocks, Stagecraft, Bettors, and Executors—to help you spot who’s driving real transformation and who’s just playing to the crowd.

Things I Think I Think About AI

Noah distills his 2,400+ hours of AI use into a candid, unordered list of 29 controversial takeaways—from championing ChatGPT’s advanced models and token maximalism to predicting enterprise adoption bottlenecks—and invites fellow practitioners to discuss. CMOs can reach out to Alephic for expert guidance on integrating AI into their marketing organizations.

Transformers are Eating the World

Reflecting on three years of AI adoption, this talk emphasizes that transformative technologies like transformers are still in their infancy, requiring hands-on exploration and healthy skepticism toward confident predictions. It argues that AI acts more as a mirror—revealing our own organizational patterns and biases—than a crystal ball for the future.

Related terms:

Token

In large language models, a token is the basic unit of text—usually chunks of three to four characters—that the model reads and generates. Since API costs, context windows, and rate limits are all measured in tokens, understanding tokenization is essential for controlling prompt length, cost, and model behavior.
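To make the cost and context-window point concrete, here is a minimal sketch of counting tokens with OpenAI's open-source tiktoken library and its cl100k_base encoding; the prompt text and the per-token price are invented for illustration, not quoted from any provider.

```python
# Minimal sketch: counting tokens with the tiktoken library
# (assumes `pip install tiktoken`; prompt and price are illustrative).
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by many recent OpenAI models

prompt = "Transformers are eating the world."
token_ids = encoding.encode(prompt)

print(token_ids)                 # list of integer token IDs
print(len(token_ids), "tokens")  # what context windows, rate limits, and bills actually count

# Illustrative cost estimate: if a model charged $3 per million input tokens,
# this prompt would cost len(token_ids) / 1_000_000 * 3.00 dollars.
cost = len(token_ids) / 1_000_000 * 3.00
print(f"≈ ${cost:.8f} for this prompt")
```

The same text can tokenize to different counts under different encodings, which is why prompt length is best measured in tokens rather than characters or words.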

Transformer

The transformer is the neural network architecture introduced in Vaswani et al.’s “Attention Is All You Need” that replaces recurrence with parallel self-attention, enabling efficient training on internet-scale data. Its simple, scalable focus on attention powers state-of-the-art models across text, vision, protein folding, audio synthesis, and more.
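To show what "attention" means in practice, here is a minimal NumPy sketch of the scaled dot-product self-attention at the transformer's core; the shapes and random values are illustrative, and real models add multiple heads, masking, positional information, and learned projections trained at scale.

```python
# Minimal sketch of scaled dot-product self-attention (the core transformer operation).
# Shapes are illustrative; production models use many heads, masks, and trained weights.
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """X: (seq_len, d_model); W_q/W_k/W_v: (d_model, d_k) projection matrices."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v             # project inputs to queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how strongly each position attends to every position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # each output is a weighted mix of all positions

# Toy usage: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)        # (4, 8)
```

Because every position looks at every other position in one matrix multiply, the whole sequence can be processed in parallel, which is what makes training on internet-scale data practical.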

Fine-Tuning

Fine-tuning continues training a pretrained language model on a smaller, task-specific dataset so it internalizes particular behaviors, styles, or domain knowledge. While it yields more consistent formatting and terminology than prompting alone, it requires curated data, additional training time, and can lead to loss of general capabilities.
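For a sense of what that "smaller, task-specific dataset" typically looks like, here is a hypothetical sketch of chat-style training examples written out as JSONL, the format commonly accepted by hosted fine-tuning APIs; the house-style instructions, example content, and file name are invented for illustration.

```python
# Hypothetical sketch of a chat-style fine-tuning dataset in JSONL form.
# The examples and style guide are invented; real datasets need many curated rows.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You write product copy in our house style: plain, confident, no hype."},
            {"role": "user", "content": "Describe the WWGPTD bracelet in one sentence."},
            {"role": "assistant", "content": "A small reminder that reaching for AI first is table stakes, not a shortcut."},
        ]
    },
    # ...more curated examples covering the behaviors and terminology the model should internalize
]

with open("finetune_train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The quality and consistency of these examples matter more than raw volume: the model will reproduce whatever formatting and terminology the dataset demonstrates, including its mistakes.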