A new wave of AI credentialling platforms has arrived. You can now take a live conversational assessment, receive a numerical score between 1 and 10, and add it to your LinkedIn profile. The ambition is serious and the need is real. AI fluency is becoming one of the most valuable professional skills of the decade.
But there is a problem with the credentialling approach — and it matters most for the very people who stand to gain the most from working with AI.
The Problem with Scoring Fluency
Professional benchmarks are designed to answer a single question: how good are you right now? That is a useful question if you are an employer screening candidates. It is much less useful if you are trying to understand how to get better.
More critically, a snapshot score treats AI fluency as a fixed skill, like typing speed or spreadsheet proficiency. But working productively with AI is an evolving, deeply personal capability. It is shaped by your domain knowledge, your communication habits, your tolerance for ambiguity — and, as I have been exploring in my own practice, by the cognitive profile you bring to the interaction.
Introducing AIDED-T
AIDED-T is a framework I have been developing to evaluate and support growth in practical AI fluency. Unlike a scoring rubric designed to benchmark performance, AIDED-T is designed as a developmental scaffold — a tool for understanding where you are, how you got there, and what the next step looks like.