Thinking Ahead, Building Ahead

Charles Gallant, June 12, 2025

Back in 2003, I remember watching friends type full questions into Google. Queries like “Where is the best place to take parents to dinner in Manhattan?” would fail completely. We had to learn to bend our thinking around Google’s limitations. Eventually, we became interpreters, converting our questions into strings and syntax.


While all interfaces require translation, early Google Search stood above the rest as a particularly pure example of humans adapting their instincts for the needs of the computer. Reduce the number of keywords. Homogenize your phrasing. Append the word "reddit" for human testimony.

Born out of language itself, LLMs have demolished this translation barrier. Sure, prompting is still an art for crafting specific outputs, but for everyday use, we’re free of syntax. The technology finally allows us to think WITH it.

Embrace the Overwhelming

With this barrier removed, it’s on us to shed old habits. In fact, it pays to adopt the opposite behaviors: overwhelm AI with information, and overestimate what it’s capable of.

Stop worrying about overwhelming the system. Write whole paragraphs. Throw multiple documents into the context window. The more context you provide, the better the output.
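As a minimal sketch of what “throw multiple documents into the context window” looks like in practice: rather than trimming a question down to keywords, bundle every relevant document into one prompt. The documents and the prompt layout here are hypothetical stand-ins for whatever client library and format you actually use.

```python
def build_prompt(question: str, documents: dict[str, str]) -> str:
    """Assemble one large prompt from a question plus full supporting docs."""
    sections = []
    for name, text in documents.items():
        # Label each document so the model can cite where an answer came from.
        sections.append(f"--- {name} ---\n{text}")
    context = "\n\n".join(sections)
    return (
        "Use the documents below to answer the question.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

# Illustrative documents — in real use, paste in the whole files.
docs = {
    "meeting-notes.txt": "We agreed to ship the beta on Friday.",
    "roadmap.md": "Q3 priority: beta launch, then pricing experiments.",
}
prompt = build_prompt("When is the beta shipping?", docs)
```

The point isn’t the string formatting; it’s the habit: default to including everything and let the model sort out relevance.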

Lose the reflex to homogenize. Build the habit of asking anything, then refine only when necessary. The old rules of careful query construction are artifacts of limited systems.

Assume failures are temporary. When AI stumbles, those who dwell on the failure often miss partially usable output. Don’t let it slow you down. What fails today succeeds tomorrow.

From Thinking Ahead to Building Ahead

It's tempting to focus solely on AI's best capabilities and overlook what it can do imperfectly. A winning strategy is emerging among builders: engage with what AI can do partially, and build for the moment the partial becomes complete.

[Figure: build-ahead-diagram.svg]

The alternative is a reactive approach, and one that immediately puts your software in a race against everyone else with the same idea (not to mention, the same frantic refactoring schedule).

If you've placed your bets correctly and found ways to allow AI to scale within your app, you'll already be in the market when a new model release changes the game. This isn't theoretical. Cursor proved it. On a recent episode of the Dwarkesh Patel podcast, Sholto Douglas, who works for Anthropic, explained: "Cursor hit PMF with Claude 3.5 Sonnet. They were around for a while before, but then the model was finally good enough that the vision they had of how people would program, hit."

They built their vision before the models could deliver, so when the crossover arrived, they were ready.

The same pattern played out with tool calling. A year ago, function calling was fragile—models would hallucinate parameters, forget to invoke tools, call them in nonsensical loops, or, often most frustratingly of all, insist they didn’t have access to the thing you knew they had access to. Teams relying on tool calling looked reckless. Today, reasoning models can orchestrate tool calls with ease, and now the functionality serves as the foundation for the recent explosion of agents.

Think about that: every major AI product now depends on capabilities that were "broken" twelve months ago.

Why it Works

There is a foundational text in AI called “The Bitter Lesson” by Richard Sutton that continually asserts its relevance. It teaches us that "general methods that leverage computation are ultimately the most effective, and by a large margin." The same principle applies to AI products. Don't optimize for today's limitations. Bet on the curve.

When you build ahead:

  • Your product has headroom for models to improve
  • You're at the margins of what's possible
  • You have time to adjust, refine, and be wrong

When you build for perfection:

  • You're stuck reacting to what AI has already done
  • Your architecture assumes static capabilities
  • You're competing on yesterday's playing field

Practical Guidelines

Ship uncomfortable products. There’s a famous line in the startup world: if you aren’t embarrassed by the first version of your product, you’ve launched too late. The AI corollary may be this: if your product doesn’t feel slightly broken, you’re not building far enough ahead.

Allow for AI headroom. When models update, assume that inference and reasoning will continue to improve. Assume the rendered output will need to be more dynamic. Assume other agents might even be involved.

Expose the reasoning. By revealing how the AI arrived at the answer, you’re helping users understand failure/success points and giving yourself a breadcrumb trail to follow if things go awry. 
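A small sketch of what exposing the reasoning can look like: keep the model’s explanation alongside its answer so users (and you) can trace where things went right or wrong. The `AnswerWithReasoning` shape and the `render` helper are illustrative, not a real API.

```python
from dataclasses import dataclass

@dataclass
class AnswerWithReasoning:
    answer: str
    reasoning: list[str]  # steps the model reported taking

def render(result: AnswerWithReasoning) -> str:
    """Show the answer first, then the breadcrumb trail of reasoning steps."""
    steps = "\n".join(f"  {i + 1}. {s}" for i, s in enumerate(result.reasoning))
    return f"{result.answer}\n\nHow I got here:\n{steps}"

# Illustrative result — in real use, this comes back from the model.
result = AnswerWithReasoning(
    answer="The beta ships Friday.",
    reasoning=[
        "Found the date in meeting-notes.txt",
        "Confirmed it against roadmap.md",
    ],
)
```

Even a lightweight trail like this turns an opaque failure into something a user can diagnose at a glance.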

Let users complete the picture. Hedge your bets by giving users a chance to approve/decline the output. Allow the AI to get it partially correct without breaking. As a side effect, analytics related to these corrections will be priceless. 

The New Era

The awkward early days of LLMs are nearly behind us, and the translation barrier is already gone. Reasoning models are thriving, and, given room to scale within the right architecture, they will usher in a new era of software.

The advantage belongs to those who treat current capabilities as a floor, not a ceiling. Those still perfecting yesterday's limitations are on their heels.

Build ahead. Ship ahead. The models are catching up faster than you think.