There is a conversation happening right now in workplaces, universities, and living rooms that tends to get framed in exactly the wrong way. The question people are asking is: which jobs will AI take? The question worth asking is different: what does it mean to know something, now that knowing is almost free?

These aren't the same question. The first is about economics. The second is about something stranger and more important — about what expertise actually is, what it was always for, and what survives when the part that could be automated gets automated.


The Expensive Era Is Over

For most of human history, expertise was expensive to acquire and expensive to access. If you needed to know something — how a contract should be structured, whether a rash was serious, what the precedents were in a legal case — you needed someone who had spent years acquiring that knowledge, and you needed to pay for their time.

This wasn't incidental to how expertise worked. It was constitutive of it. The value of being an expert was partly the value of being rare. A doctor knew what a patient didn't know. A lawyer understood a system the client couldn't navigate alone. A financial advisor held information that wasn't available elsewhere, or at least wasn't available in a synthesised, usable form.

What AI has done — and continues to do, rapidly — is make the information component of expertise cheap. Not free, not perfect, not without risks of hallucination or misjudgement, but cheap enough that the architecture of traditional expertise is genuinely in question. Anyone with a decent AI assistant can now ask complex medical, legal, or financial questions and receive responses that, in terms of raw informational content, rival what a junior professional would produce.

This is not a threat on the horizon. It is the current reality for anyone who has actually used these tools seriously.


What Expertise Was Always Made Of

Here is the thing that gets missed in most of these conversations: expertise was never just about holding information. It was always made of several things at once, and information was only one of them.

The others are harder to name but easy to recognise. There's judgment — the ability to know which piece of information matters in a specific situation, when the textbook answer doesn't apply, when something feels wrong before you can say why. There's what you might call integrated attention — the capacity of a skilled diagnostician, editor, or engineer to perceive a whole and notice the detail that doesn't fit, not by running through a checklist but because pattern recognition has become unconscious and fast. There's accountability — the weight of standing behind a decision, of having your name on the recommendation, of caring about the outcome because it connects to your identity and your record. And there's relationship — the part of expertise that is expressed not in information but in timing, in the right question, in knowing when not to speak.

AI has commoditised the first of these components, information. It cannot, in any meaningful sense, replicate the others.

This is not technological optimism or wishful thinking. It is a distinction that matters if you're trying to understand what is actually happening to expertise right now.


The Paradox of Abundant Information

There's something counterintuitive in what AI abundance does to the value of certain human skills.

You might expect that if AI can answer questions accurately, the ability to find answers becomes less valuable. This is true. But it turns out that what becomes more valuable — considerably more — is the ability to ask the right questions in the first place.

Knowing what to ask requires a mental model of the domain. It requires understanding not just what you want to know but what you don't know you need to know. It requires the ability to evaluate an answer and recognise when it's subtly wrong or technically correct but practically misleading. These are not skills AI teaches you by doing the work for you. They are skills that atrophy if you let the AI do everything, and strengthen if you use it to accelerate your own thinking rather than replace it.

The people who benefit most from AI tools tend to be those who already have meaningful domain knowledge. A senior doctor using AI to scan the literature gets better faster. A junior doctor who uses AI to avoid developing clinical judgment gets competent at using AI, not at medicine. The tool amplifies what you bring to it. If you bring very little, it has less to amplify.

This is the quiet paradox of the moment: AI makes expertise more accessible to non-experts while simultaneously making genuine expertise more valuable. Not less.


What We're Actually Losing

It would be dishonest not to acknowledge what is genuinely at risk.

The traditional pathway to expertise involved doing a lot of work that AI now renders unnecessary. The memo that took a junior lawyer three hours while they figured out how to structure an argument. The literature review that took a researcher a week before they understood the landscape well enough to contribute to it. These tasks were partly about the output and partly — arguably mainly — about the process of developing capability through difficulty.

If AI handles all of that scaffolding, the question becomes: how do novices develop into experts? What is the path from knowing nothing to knowing enough to ask the right questions?

This is a real problem and no one should pretend otherwise. The answer isn't to avoid AI tools — that's neither realistic nor desirable. But it does require a shift in how learning is structured. The emphasis needs to move from information retrieval to application, from recall to evaluation, from answering questions correctly to interrogating whether the question itself is the right one.

Educational institutions are slowly recognising this. Workplaces are learning it more urgently, through experience. The adaptation is happening, but unevenly, and with considerable friction.


The Skills That Survive

Some skills are not made redundant by AI's command of information. They may even become more important as the informational floor rises.

Curiosity, for one. Not the directed curiosity of efficient research — AI handles that well — but the kind that makes unexpected connections, that notices an anomaly in a field you weren't supposed to be paying attention to, that pursues a question for its own interest rather than because it is instrumentally useful. Curiosity produces the questions worth asking. It is not a database skill.

Communication, for another. Not the ability to write grammatically correct sentences, which AI can do reliably, but the ability to understand what another person actually needs when they ask you something, and to respond in a way that is useful to them specifically. This is a human skill that scales with attention and care, not with information.

And then there is what might simply be called care — the investment in an outcome that comes from it being yours. An AI tool produces the same quality of output whether the stakes are low or high. Humans perform differently — often better — when something matters. When a patient is in front of you. When your client is depending on the advice. When your name is on the work. That connection between doing something and it mattering is not a limitation of human psychology. It is the source of a great deal of what makes human expertise worth having.


A Better Question

The conversation worth having isn't about which jobs survive AI. Most jobs will survive in some form, and most will change substantially. That's been true of every previous technology that reshaped the economics of doing things.

The more interesting question is about what we choose to keep doing ourselves, and why.

Some tasks will be fully automated and should be. The hours junior staff used to spend on routine research and initial drafts can be recovered for things that genuinely require human attention. This is a gain, not a loss, if the freed capacity is used for something that needs it.

But there is a version of AI adoption where the automation keeps expanding inward — where we keep delegating the next layer of judgment, the next layer of evaluation, the next layer of decision-making — until the human in the loop is mostly just approving AI outputs without the depth of understanding to know when they're wrong.

That version is worth resisting. Not because AI is bad, but because the slow erosion of the capacity to judge is hard to notice and hard to reverse. The endpoint is not expertise augmented by AI. It's the appearance of expertise, built on a foundation that isn't there.

The tools are genuinely impressive. Use them. But stay curious about what you still don't understand. Keep asking questions you don't already know the answer to. And notice when you're thinking alongside the AI versus when you've stopped thinking and started approving.

That distinction, maintained deliberately, is what the age of AI actually requires of human experts. It's more demanding than it sounds. It's also, perhaps, more interesting.