A Perspective

In my last post, I suggested that we need to make haste slowly: to stand back a little and return to first principles. What is really going on here? What can we learn from what has happened before? Are we really where we think we are, or are we confused by the sheer speed of the change we are experiencing?

Over the next few posts, for the remainder of this month and into December, I'm going to do just that: make haste slowly. My intention is to think out loud with you, not to be right. I'm hoping we can pick aspects of this up in our Wednesday evening calls and see where it takes us.

And the thought I am holding as I start is the evolving nature of our emotional relationship with technology.


When The Looms Fall Silent

Something curious happened during the Industrial Revolution that nobody much talks about.

We all know the story of skilled weavers replaced by children operating power looms, of craftsmen made obsolete by machinery. But there's a second act to that story that feels particularly relevant at the moment. The machines grew so complicated that the children could no longer manage them, and the factory owners were compelled to seek out the very craftsmen they had displaced, recruiting them back not as weavers but as overseers, mechanics, and engineers: people who could understand and optimise these increasingly complex systems.

It strikes me that we're living through our own version of that Victorian cycle, only compressed into years rather than decades, and targeting not our hands but our minds. The parallel isn't perfect, of course, few historical parallels are, but it's instructive.

The first wave of AI tools, like those early power looms, seemed laughably simple. Chatbots that could barely hold a conversation, and recommendation engines that suggested buying another toaster when we'd just bought a toaster. So we felt we could dismiss them, much as those Victorian craftsmen might have initially scoffed at machinery operated by children, but the technology is expanding at an exponential rate. What took the Industrial Revolution fifty years, AI is accomplishing in five.

Carlota Perez's work on technological revolutions identifies a consistent pattern: first comes the installation period, creative destruction, chaos, displacement, and financial speculation. Then, after a crisis, comes deployment, creative construction, the emergence of new skills, new roles, new ways of being.

We're in what Perez calls the "frenzy" phase of installation. Money floods into AI ventures. Companies desperately implement AI solutions without quite knowing why, and we watch nervously as systems grow capable of tasks that seemed uniquely human just months ago. It's the children at the power looms all over again, except the looms are processing words and ideas rather than thread.

There's something different this time, though, that goes beyond mere economic displacement. The Industrial Revolution, for all its disruption, left the human mind largely untouched as the unique domain of thought and creativity. Workers might have lost their craft skills to technology, but they didn't lose their sense of what made them human. AI targets precisely that preserve.

I'm currently reading Olli-Pekka Heinonen's book "Learning as if Life Depended on It" and finding it a compelling read; Heinonen is a former Finnish minister who now heads the International Baccalaureate. Early on, he mentions Karl Popper's framework of the three worlds that shape our worldview: physical reality, subjective consciousness, and the realm of objective knowledge, which forms our culture or collective understanding.

Previous technological revolutions primarily disrupted the first world, the physical realm of production; AI simultaneously disrupts all three. It doesn't just automate tasks; it challenges our subjective sense of unique human cognition whilst generating cultural products, art, music, and literature that we thought were exclusively human.

It is deeply discombobulating. It's not just job anxiety, though that's certainly present. It's something deeper, something like epistemic vertigo, a dizzying uncertainty about what knowledge means, what expertise is worth, and what makes us valuable.

I do not believe, once the hype settles, that anything other than the most routine of jobs will be totally replaced. It does, though, ask a fundamental question of us as professionals. We were educated, trained, and developed to be holders of knowledge, but now that knowledge is ubiquitous, our role changes. It becomes about how we hold that knowledge in relationship with those we're working with, and I think that changes everything.

It is the compression of time that is the critical factor making this revolution qualitatively different. The Victorian craftsman who lost his position to machinery might reasonably expect his children to find new roles in the industrial economy. There was time for cultural adaptation, time to learn new skills that might underpin a working life, and time for society to develop what we might call cognitive scaffolding around the new reality. But we seem to be living through some manic version of Moore's Law, with power and capability developing by the day. By the time someone masters prompt engineering, the need for it may have evolved into something else entirely.

It is what J.P. Castlin calls "dynamic uncertainty", the near-instant feedback loops that change the system even as we act on it, and the psychological toll of this is something we're only beginning to understand. To go back to Popper, it is stretching the relationships between the world we grew up in and the world that is evolving, to breaking point. It is increasingly difficult to know what to believe, who to follow, and who to trust.

Despite this, I'm beginning to see the emergence of what I've been calling New Artisans, although they don't often recognise themselves as such yet. They're people who are grounded in what they do and why it matters. They do not have a deep understanding of AI itself, because none of us has that yet, but they do have one of the emerging nature of the conversation between human and artificial intelligence. They know when to trust the machine and when to override it. They understand the subtle art of prompt construction not as a technical skill but as a form of generative dialogue.

It is both art and discipline. The better the technology becomes, the more seductive it is, and the easier it becomes, when we're under pressure, to delegate to it. Understanding that dividing line is key. It is central to whether we can exercise our own unique creativity or whether we just follow the bouncing ball of somebody else's input.

It isn't just "prompt engineering", though that's part of it. It's something more akin to what the Victorians might have called "mechanical sympathy", that intuitive understanding of how systems behave, their capabilities and limitations, their rhythms and requirements. Driving an electric car may be a compelling necessity, but the relationship with the power unit is nothing like that which we have with a well-designed, built, and tuned internal combustion engine. 

Perhaps now it's about learning "algorithmic sympathy", a feeling for how AI systems process information, where they excel, where they don't, and where it's down to us. Perez's framework suggests that technological revolutions require institutional innovation to move from chaotic installation to beneficial deployment. The Factory Acts, technical colleges, and the welfare state were institutional responses to industrial disruption.

We're still waiting for equivalent innovations for the AI age. Our educational institutions still prepare people for a world of stable expertise. Our professional bodies still certify knowledge that AI can access instantly, and our labour laws still assume clear distinctions between human and machine work. 

The data is informative. Whilst organisations have yet to show real benefit from using AI, the people inside them are becoming more familiar with it and using it on their own terms. Quite often, they can't apply that familiarity inside the organisation because it's prohibited, and I suspect that the emotional relationship we have with AI will develop much faster in individuals than in organisations that are afraid of it, whose only reason for using it is a short-term increase in profits rather than a long-term improvement in their place in the world.

I think the deepest challenge we face is existential rather than economic. Work provides more than income; it provides identity, structure, social connection, and purpose. When AI can perform cognitive tasks we've built our identities around, it doesn't just threaten employment, it threatens our sense of self. 

The Victorian craftsmen who returned to the factories didn't return as better weavers than the machines. They returned as something the machines couldn't be: improvisers, problem-solvers, the ones who could handle exceptions, breakdowns, the unexpected. They brought judgment, context, what we might call wisdom, qualities that emerged not despite the machinery but in relationship with it.

So, where does this leave us? If the parallel holds, we should expect the emergence of new forms of skilled work, not immediately, but after a period of disruption and institutional adaptation. These won't be the old skills dressed up in new clothes but genuinely novel capabilities that emerge from the intersection of human and artificial intelligence. The psychological adaptation required is immense. We're not just learning new skills; we're reconstructing our understanding of what it means to be human in an age of artificial intelligence. 

As I'll come to in my next post, I think it requires a total rethink of what it means to learn, and of the role of education: from a service industry to a defining worldview.

The Industrial Revolution eventually led to improved living standards, new forms of work, and human flourishing, though the path was neither straight nor painless. The same may prove true for the AI revolution. But the compression of time, the psychological depth of disruption, and the institutional lag create a uniquely volatile situation.

We're living through the messy middle, that period when the old certainties have dissolved but new patterns haven't yet crystallised. It's uncomfortable and disorienting, but it's also a space of possibility, where new boundaries between human and artificial intelligence are being negotiated in real-time.

Craft will return, although it won't look like we expect. It will not be competing with technology but dancing with it, finding ways to blend human intuition and the tools we have newly available. It's about new literacies, new sensibilities, new ways of being human in a world of thinking machines.

And rethinking what learning means.

The question isn't whether we'll adapt (humans always have), but whether our institutions, our psychology, and our sense of meaning can evolve quickly enough to prevent widespread disruption and suffering. The Victorian experience suggests we'll muddle through, though not without cost. But then again, they had decades where we have years.

Perhaps that's the real lesson from history: not that the pattern will repeat exactly, but that humans have consistently found ways to remain relevant, necessary, even irreplaceable, not by competing with machines but by continuously discovering what makes us most human. The looms may fall silent, but craft, in new and unexpected forms, returns.

Craft is the thread that connects every technological change we have experienced.