Conway's Law
Conway's Law states that "organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations." First articulated by computer scientist Melvin Conway in his 1968 paper "How Do Committees Invent?", the law reveals how organizational structure fundamentally shapes the systems we create. The classic example is a company with separate sales, marketing, and support departments inevitably designing a website with "Shop," "Learn," and "Support" navigation—mirroring internal divisions rather than user needs.
The law operates through what researchers call the "mirroring hypothesis"—the tendency for technical dependencies in systems to reflect organizational ties. This happens because mirroring conserves cognitive resources: when system architecture matches organizational structure, it's easier for teams to understand, communicate about, and maintain the system. Research has found evidence of this phenomenon across industries from software and semiconductors to automotive, banking, and construction.
Organizations have three main responses to Conway's Law: accept it as potentially optimal, seek a balanced approach that maintains some beneficial mirroring while avoiding its worst effects, or engage in "strategic mirror-breaking" (sometimes called an "inverse Conway maneuver") where they deliberately restructure their organization to achieve a desired system architecture. The implications extend beyond system design—as Rebecca Henderson showed in her work on "architectural innovation", organizational structure can fundamentally limit a company's ability to innovate when new technologies require new organizational forms.
Referenced in these posts:
The Alephic AI Thesis: 2025
The AI revolution will be dictated by three physical constraints—compute packaging capacity, energy availability, and organizational agility—that concentrate power in gravity wells. Whoever controls these choke points, not merely the best models, will shape the next decade of AI.
On Bureaucracy
This post examines how growing organizations can devolve into bureaucratic labyrinths that stifle innovation, drawing on insights ranging from Olson’s theory to Conway's Law. It highlights real-world examples and strategies for streamlining processes, so that the structures meant to ensure efficiency don’t ultimately hinder creativity.
The SaaS Industrial Complex: When Their Mirror Becomes Your Mold
Just as Eisenhower warned of a powerful military-industrial alliance and Conway observed that systems mirror their creators’ communication structures, today’s sprawling SaaS ecosystem quietly imposes vendor-defined processes on every company. By harnessing AI and proprietary “private tokens,” enterprises can escape this one-size-fits-all mold and build software that truly reflects their unique DNA and strategic edge.
Related terms:
Temperature
Temperature is a parameter controlling a language model’s randomness: at 0 it always picks the most probable next token for deterministic, reliable output, at 1 it samples more broadly for varied, creative results, and above 1 it becomes increasingly random. Choosing the right temperature (e.g., 0 for consistent data extraction or 0.7–0.9 for brainstorming) balances reliability and diversity.
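The mechanics behind this parameter can be sketched in a few lines: temperature divides the model's raw logits before the softmax, so low values sharpen the distribution toward the top token and high values flatten it. This is a minimal illustration, not any particular model's implementation; the function name and the four-token logits are hypothetical.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from raw logits at a given temperature.

    temperature == 0 degenerates to greedy argmax (deterministic);
    higher temperatures flatten the distribution, making less
    probable tokens more likely to be chosen.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting distribution.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for a four-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]
print(sample_with_temperature(logits, 0))  # always 0: greedy pick of the top logit
```

At temperature 1 the same call samples from the unmodified softmax distribution, which is why outputs vary run to run.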
Inference
Inference is the process of running a trained model on new input to generate a prediction or output—such as sending a prompt to GPT-4 and receiving a response. Unlike training, which is costly and infrequent, inference occurs millions of times per day, with speed (tokens per second) and cost (dollars per million tokens) determining an AI feature’s responsiveness and economic viability.
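The two metrics mentioned above translate directly into simple arithmetic. A rough sketch, using purely illustrative prices and speeds (not any vendor's actual figures):

```python
def inference_cost_usd(tokens, price_per_million_tokens):
    """Dollar cost of generating `tokens` tokens at a given price."""
    return tokens / 1_000_000 * price_per_million_tokens

def latency_seconds(tokens, tokens_per_second):
    """Wall-clock time to stream `tokens` tokens at a given speed."""
    return tokens / tokens_per_second

# Illustrative assumptions: a 500-token response at $10 per million
# output tokens, streamed at 50 tokens per second.
print(inference_cost_usd(500, 10.0))  # 0.005 -> half a cent per call
print(latency_seconds(500, 50))       # 10.0 seconds to stream
```

Multiplied across millions of daily calls, fractions of a cent and seconds of latency become the dominant economic and product constraints.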
Accretive Software
Accretive software refers to AI platforms that automatically absorb model improvements as margin expansion by treating models as interchangeable components and routing queries to the optimal model in real time. Rather than fighting obsolescence, these platforms convert every efficiency breakthrough into customer value or profit margin.