I want to tell you about a conversation I had with Claude this week — not because the output was impressive, but because the process of getting there was.
I am a researcher. I have a postgraduate qualification in higher education. I am familiar with Bloom's taxonomy, Kolb's experiential learning cycle, and the Dreyfus model of skill acquisition. And yet I could not have produced what came out of this conversation on my own — not in the time available, and not with the cross-domain synthesis that Claude brought to it.
What I want to explore here is not the output. It is the method.
The Starting Point: A Research-Led Request
I have been working on a project to evaluate my own development as a vibe coder: someone who builds software tools using AI as the primary development engine, without writing code independently. Rather than simply asking Claude to "give me a framework", I made a deliberate decision: I asked it to go into research mode first. To consult the academic literature. To build something evidence-based before we discussed it.
This is, I think, one of the most underused approaches in everyday AI prompting. Most people ask for an answer. I asked for a research process first and a discussion second. The distinction matters: an answer invites acceptance or rejection, while a research process produces claims with a provenance you can trace, check, and challenge.
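My own exchange happened in the ordinary chat interface, but the same two-phase pattern can be sketched in code for anyone driving Claude through the API. The sketch below is illustrative only, not a record of my conversation: it assumes the official anthropic Python SDK, and the model name and prompt wording are placeholders.

```python
# A minimal sketch of the research-first, discuss-second pattern,
# assuming the official anthropic Python SDK (pip install anthropic).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-5"     # placeholder; substitute the model you use

# Phase 1: ask for a research process, not an answer.
research_prompt = (
    "Before proposing any framework, survey the academic literature on "
    "skill acquisition and AI-assisted learning. Summarise the established "
    "models and their evidence base. Do not build anything yet."
)
research = client.messages.create(
    model=MODEL,
    max_tokens=2000,
    messages=[{"role": "user", "content": research_prompt}],
)
research_summary = research.content[0].text

# Phase 2: only now ask for synthesis, with the research held in context.
discussion = client.messages.create(
    model=MODEL,
    max_tokens=2000,
    messages=[
        {"role": "user", "content": research_prompt},
        {"role": "assistant", "content": research_summary},
        {
            "role": "user",
            "content": "Now, drawing only on the models you cited, propose "
                       "an evidence-based framework we can interrogate together.",
        },
    ],
)
print(discussion.content[0].text)
```

The structure, not the SDK, is the point: the second request carries the first answer in its context, so whatever framework comes back is anchored to evidence the model has already laid out and you have already read.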
What Claude Did: Cross-Domain Synthesis Under Direction
Claude searched across four bodies of literature (pedagogy, educational psychology, computer science education, and human-AI interaction research) and returned with a synthesis that drew on Bloom's Digital Taxonomy, Kolb's Experiential Learning Cycle, and the Dreyfus model. Out of that synthesis came the framework itself, AIDED-T. The acronym was a byproduct: Claude identified the five core dimensions, noticed that their initial letters spelled AIDED, recognised this was apt for an AI-learning framework, and kept it.
When I expressed admiration for this, Claude was transparent about exactly how it happened — not as a stroke of creative genius, but as a pattern-recognition move followed by a decision to use what fit. That transparency was itself pedagogically useful. It modelled the kind of critical evaluation I am trying to develop.
What I Did: The Human Elements That Made It Work
1. I asked for evidence first
Requesting a research-based foundation before any framework building meant Claude was drawing on real models with real academic provenance, which I was then able to evaluate using my existing knowledge. My prior expertise became a quality check on the AI output.
2. I introduced external reference material
Sharing a commercial framework — even just its outline — gave Claude a real-world anchor for the model it was building. Bringing external material into the conversation expands the synthesis space and often produces more grounded, practically useful outputs than abstract prompting alone.
3. I pushed back and refined
When Claude proposed the initial framework, I did not simply accept it. I interrogated the comparison with the commercial framework, asked about missing concepts, and directed the conversation towards what I thought was absent. The ability to steer rather than follow is a learnable skill, and I was practising it in real time.
4. I asked the generative question
I asked Claude how it had come up with the acronym — whether this was a creative act or something more mechanical. This produced one of the most useful moments in the conversation: a direct, honest account of the cognitive process involved.