Can you state the problem so completely — with so much surrounding information — that a capable system can solve it without needing to gather more context?
Tobias Lütke, CEO of Shopify, on the foundational principle of effective AI collaboration
Core Definition
Prompting is the broad skill of providing input to AI systems so that they can do useful work. What was once a single conversational skill has evolved into four distinct disciplines, each operating at a different level of abstraction, with different stakes, different time horizons, and different skill requirements.
The foundational insight — articulated by Tobias Lütke — is deceptively simple: humans are generally sloppy communicators who rely on shared context that often does not exist. AI forces us to be precise. The question is no longer just "what do I ask?" but "what does the system need to know, and how do I architect that information environment?"
The Four Disciplines
Prompting has expanded into four distinct layers, each building on the last. Understanding which layer you are working at determines the skills needed, the stakes involved, and the time horizon of your decisions.
The Information Architecture Insight
The key insight of context engineering is not just to provide information — it is to design an infrastructure so the right information is available for the right task at the right time, without flooding the context window with irrelevant material. This is analogous to the difference between giving someone a filing cabinet full of documents and giving them a well-organised knowledge base with good search.
Context engineering includes:

- system prompts and tool definitions that establish the operating environment;
- retrieved documents and message history that provide relevant prior context;
- memory systems that persist information across sessions;
- external connections (MCP servers) that enable real-time data access;
- project files, conventions, and task definitions that establish shared understanding between human and AI.
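The components above can be sketched as a small context assembler. This is an illustrative sketch only (all names are hypothetical, not a real API): fixed layers such as the system prompt, persisted memory, and the task definition go in first, then retrieved documents are added in order of relevance until a token budget is reached, so the context window is never flooded with low-value material.

```python
from dataclasses import dataclass, field

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

@dataclass
class ContextBuilder:
    system_prompt: str
    token_budget: int = 1000
    memory: list = field(default_factory=list)     # persisted across sessions
    retrieved: list = field(default_factory=list)  # (relevance, document) pairs

    def add_retrieved(self, relevance: float, doc: str) -> None:
        self.retrieved.append((relevance, doc))

    def build(self, task: str) -> str:
        # Fixed layers first: operating environment, memory, task definition.
        parts = [self.system_prompt, *self.memory, task]
        used = sum(estimate_tokens(p) for p in parts)
        # Then retrieved documents, most relevant first, until the budget is hit.
        for relevance, doc in sorted(self.retrieved, reverse=True):
            cost = estimate_tokens(doc)
            if used + cost > self.token_budget:
                continue  # skip anything that would overflow the window
            parts.insert(-1, doc)  # keep the task definition last
            used += cost
        return "\n\n".join(parts)

builder = ContextBuilder(system_prompt="You are a careful technical assistant.")
builder.memory.append("User prefers British spelling.")
builder.add_retrieved(0.9, "Relevant style guide excerpt...")
builder.add_retrieved(0.2, "Tangentially related meeting notes " * 200)  # large, low value
context = builder.build("Task: review the attached draft for clarity.")
```

Note the design choice: the high-relevance excerpt makes it into the assembled context, while the large, low-relevance document is dropped because it would blow the budget — the filing cabinet versus the well-organised knowledge base.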
Most failures in AI-assisted work are not model failures — they are context failures. The model didn't have the right information, or had too much irrelevant information, or lacked the right framing to understand what success looks like. Context engineering addresses this systematically rather than through trial-and-error prompting.
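The three failure modes just named can be treated as a pre-flight checklist rather than discovered by trial and error. A hedged sketch (function and parameter names are illustrative assumptions): check that required information is present, that the context plausibly fits the window, and that success is explicitly framed.

```python
def diagnose_context(context: str, required_facts: list, window_tokens: int) -> list:
    """Return a list of likely context failures; an empty list means none detected."""
    issues = []
    # Failure 1: the model didn't have the right information.
    for fact in required_facts:
        if fact.lower() not in context.lower():
            issues.append(f"missing information: {fact}")
    # Failure 2: too much material for the window (~4 characters per token).
    if len(context) // 4 > window_tokens:
        issues.append("context likely exceeds the window; trim irrelevant material")
    # Failure 3: no framing of what success looks like.
    if "success" not in context.lower() and "done when" not in context.lower():
        issues.append("no explicit success criterion")
    return issues

issues = diagnose_context(
    context="Refactor the billing module. Done when all tests pass.",
    required_facts=["billing", "tests"],
    window_tokens=8000,
)
```

A context that names its subject, carries the needed facts, fits the window, and states its success criterion passes cleanly; anything flagged is a context failure to fix before blaming the model.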
Applications & Implications
Understanding these four disciplines has significant practical implications for how individuals and organisations approach AI adoption. Most current AI training focuses exclusively on prompt craft — the session-based, individual-skill level. This misses the higher-leverage disciplines of context engineering and intent engineering, which determine whether AI systems behave reliably and appropriately across diverse situations.
For neurodivergent users in particular, context engineering is especially powerful. ADHD is characterised by challenges with working memory, task initiation, and maintaining context across interrupted work sessions. A well-engineered context architecture can serve as an external working memory system, maintaining continuity that the individual brain cannot reliably provide.
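The external-working-memory idea can be made concrete with a minimal sketch (file name and state fields are illustrative assumptions): task state is written to disk after every step, so an interrupted session resumes with full continuity carried by the file rather than by the person.

```python
import json
from pathlib import Path

STATE_FILE = Path("session_state.json")

def save_state(state: dict) -> None:
    # Persist the working state so an interruption loses nothing.
    STATE_FILE.write_text(json.dumps(state, indent=2))

def load_state() -> dict:
    # Resume from disk if prior state exists; otherwise start fresh.
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"task": None, "completed_steps": [], "next_step": None}

# First session: record progress, then get interrupted.
state = load_state()
state.update(task="Draft quarterly report",
             completed_steps=["outline", "gather figures"],
             next_step="write introduction")
save_state(state)

# Later session: the file, not working memory, carries the context forward.
resumed = load_state()
```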
For organisations, intent engineering represents the critical frontier: how do you encode your values, culture, and decision-making priorities into AI systems that will act on your behalf? This is not a technical question alone — it is a governance, ethics, and leadership challenge of the first order.
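Even though intent engineering is primarily a governance challenge, one technical ingredient can be sketched: encoding an organisation's decision-making priorities as an explicit ranking, so that when values conflict the resolution is deterministic and auditable. A toy illustration only — the value names and the flat ranking are hypothetical assumptions, not a recommended policy design.

```python
# Ranked organisational values, highest priority first (illustrative names).
PRIORITIES = ["user_safety", "legal_compliance", "customer_trust", "speed_to_ship"]

def resolve(conflicting_values: list) -> str:
    """Return the value that wins a conflict, per the ranked priority list."""
    for value in PRIORITIES:
        if value in conflicting_values:
            return value
    raise ValueError("no recognised value in conflict set")

# When shipping fast conflicts with customer trust, the ranking decides.
winner = resolve(["speed_to_ship", "customer_trust"])
```

The point is not the code but the discipline: making the ranking explicit forces the leadership conversation the paragraph above describes.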