Apr 1, 2026 | 4 Minute Read

Everyone Has the Tools Now. Why Is the Output Still So Different?

The bottleneck has shifted. It is no longer access to information, tools, or capability. It is the will to act on what you have.


For most of the last decade, the limiting factor in knowledge work was access.

Access to the right information. Access to the right frameworks. Access to someone who could draft the thing, analyse the dataset, or synthesise the research quickly enough to be useful. Getting smart fast on a new topic meant hours of reading, waiting for the right person to be available, or accepting that some things would just take longer than you wanted.

That constraint has largely dissolved.

Within seconds, you now have information, frameworks, first drafts, structured analysis, competitive research, and synthesised options — available on demand, across almost any domain. The access problem is solved.

Which means the bottleneck has moved.

We now have everything at our fingertips. The question is no longer can I access what I need? It is what do I do with it?

— Hetal Mistry, Head of Delivery, Axelerant

This shift sounds like good news. It is. But it also surfaces something that was always true and is now more visible than ever: having the tools is not the same as using them well. Abundance reveals character in a way that scarcity never had to.

The Gap Nobody Talks About

Walk into any organisation navigating an AI transition and you will find two groups of people.

The first group has embraced the tools. They are generating faster, iterating more, exploring things they previously did not have time to touch. Their output is measurably different from six months ago.

The second group has the same access. Same tools, same licences, same training resources. Their output looks largely the same as it did before.

The difference between these two groups is not intelligence. It is not even technical skill. It is something harder to name but easier to recognise: the disposition to actually move.

Through studying high-performing individuals across fields, researcher and writer George Mack arrived at a simple formulation for this disposition. He called it high agency — and defined it as three things working together:

- Clear thinking. The ability to see a situation accurately, without the distortions of wishful thinking, social pressure, or the comfort of familiar assumptions.

- Bias to action. The default orientation toward doing rather than waiting. Not recklessness — considered movement. The question is not "should I act?" but "what is actually stopping me from acting right now?"

- Disagreeability. The willingness to hold a position, pursue a direction, or make a decision even when it is uncomfortable, unpopular, or uncertain. Not contrarianism — conviction.

Remove any one of these and the formula breaks. Clear thinking without action is analysis paralysis. Action without clear thinking is noise. Both without disagreeability collapse the moment they meet resistance.

Why AI Makes This More Urgent, Not Less

There is a version of the AI story where the tools do the hard part and everyone benefits equally. More access, more output, a rising tide.

That version is half right.

AI does expand what is possible. But it expands possibility for everyone — which means it does not change relative position on its own. The person who was already moving fast moves faster. The person who was already stuck has more sophisticated tools to stay stuck with.

What AI actually does is make the agency gap wider and more visible.

The high-agency individual in an AI-enabled environment is operating with a multiplier that did not exist two years ago. They are synthesising faster, building faster, course-correcting faster. The compounding of small moves — each enabled by AI, each building on the last — is dramatic over months.

The low-agency individual in the same environment has access to all the same tools. But the tools do not supply the disposition to use them. They do not supply the clear thinking that knows which problem is worth solving. They do not supply the bias to action that starts before conditions are perfect. They do not supply the disagreeability that pushes through when the first version fails.

AI brings abundance. High agency is what turns that abundance into outcomes.

What This Looks Like in Practice

This is not an abstract principle. At Axelerant, we see it expressed every week in specific, concrete ways.

Clear thinking shows up when an engineer writes a specification detailed enough for an AI agent to act on reliably — because they have done the work of understanding the problem deeply before picking up a tool. It shows up when a delivery lead looks at a client engagement and names the real constraint, rather than the comfortable one.

Bias to action shows up when a team member builds a working prototype before the process to build it exists. When someone ships a Claude skill, shares it in Slack, and iterates based on real feedback — rather than waiting for the right forum, the right approval, or the right moment. The tools are available. The question is whether you start today or next week.

Disagreeability shows up when someone pushes back on a brief that is underspecified, even when the client is impatient. When a team presents a structural recommendation the client did not ask for, because the data says it is right. When an internal process is challenged because it no longer fits how work actually gets done.

None of these require AI. But AI amplifies each of them — and makes the absence of each more costly.

The Honest Self-Assessment

For anyone navigating an AI transition — whether inside an organisation building with AI, or a leader evaluating what it means for their team — there are three questions worth sitting with honestly.

On clear thinking: when you pick up an AI tool to work on something, do you know precisely what you are trying to produce and why it matters? Or are you using the tool to avoid doing that thinking?

On bias to action: is there something you have been meaning to try, build, or change for more than two weeks — and the real reason it has not happened is not the tool, not the access, not the time, but the decision to start?

On disagreeability: is there a direction you believe is right that you have not pursued because it would require a difficult conversation, an unpopular position, or the discomfort of being wrong publicly?

These are not comfortable questions. They are not meant to be.

"Build the AI-native mindset. But do not confuse having the tools with using them well."

The organisations and individuals who will look back at this period as the moment things changed are not the ones who got access to AI first. They are the ones who had — or built — the agency to use it.

The tools are ready.

The question is whether you are.

About the Author
Hetal Mistry, Director Of Global Delivery

Passionate about storytelling and history, I love reading and exploring music. Family time is essential, and I enjoy decluttering. Protecting my sleep and meals keeps me happy!
