David, a Welsh Microsoft Guy
27 March 2026 · Video

Where Does Differentiation Actually Live Now?

podcast
differentiation
ai
strategy
leadership
video
cloudy-with-a-chance-of-insights
davids-voice

This question came up on a recent episode of Cloudy with a Chance of Insights, the fortnightly podcast I record with Richard and Cyrus. Riverside created a short clip of my thoughts on it, which is shown above.

The observation that triggered the discussion was fairly straightforward. Richard recently built a website using GitHub Copilot. I built one using GitHub Spark. The two outputs were, visually, almost carbon copies: same structure, similar layout, very comparable feel. Different tools, different people, different approaches - and yet the results looked like the same hand had made both.

That is interesting on its own. But it also points to something broader.

There is a lot of discussion at the moment about how roles will change in the AI era. Uli Homann covered this recently on the Armchair Architects podcast, and it is worth your time. The framing that tends to dominate is about which tasks get automated and which skills become more or less important. That is a useful conversation to have. But I think my observation sits at a slightly different level.

If we are all using the same models, trained on the same data, with increasingly opinionated frameworks underneath, and we can all reach a "good enough" output very quickly - where does genuine differentiation actually come from?

AI is acting as a leveller, and on balance that is a good thing. More people can build more things, more quickly than they ever could before. The floor has risen. But when you raise the floor, you also compress the space in the middle. In that compressed space, being competent is no longer particularly distinctive, because the tools help you reach competence faster than before. If you are trying to stand out, incremental improvement in execution probably is not enough anymore.

So where is differentiation actually starting to show up? I am beginning to see three areas that matter more now than they did.

The first is in how things are composed. Not the individual components, but how they are brought together. How capabilities are orchestrated. How state is managed across a workflow. How a solution connects into real systems that carry constraints, governance responsibilities, data boundaries, and regulatory implications. These are not things a model generates for you automatically. They emerge from thinking carefully about how something actually needs to work in a real operating environment, not how it might theoretically function in an isolated demonstration.

The second is context. Understanding the domain you are operating in at a level that no prompt can fully express. Regulations, operating models, organisational history, existing constraints that cannot simply be optimised away because the model is not aware of them. This kind of context is not visible in the final output. It shapes what gets built and what does not, but it leaves very little obvious trace in the artefact itself.

The third is judgement. Deciding what not to build. Placing constraints deliberately rather than incidentally. Defining what "good" actually looks like in a specific situation, rather than in a general or theoretical one. This is arguably the most consequential of the three, and it is also the one that is hardest to demonstrate from the outside.

What strikes me about all of this is that two systems can produce outputs that look almost identical on the surface and yet be profoundly different in terms of how well they will hold up, scale, or respond when the real world pushes back. The visible output tells you very little about the quality of the reasoning that produced it.

I do not think this means building things becomes less important. It just shifts where the value sits. Less in the artefact itself, and more in how it was framed, how it was assembled, and how it will actually operate once it is out in the world.

So the question I keep coming back to is not how to write a better prompt. The question is how to develop a point of view that the prompt alone cannot provide. That is harder to demonstrate, harder to quantify, and considerably harder to replicate than any individual technical skill.

And I think that is where the meaningful difference is starting to live now.

#CloudyWithAChanceOfInsights #AI #Strategy #Differentiation
