I was talking to a friend today about where AI actually is right now, not where the hype says it is but where it actually is. We landed on something I’ve been sitting with for a while: the people getting real value out of these tools are already experts in their field.
That’s not a shot at beginners. It’s an observation about how the tools actually work.
The front end is where it shines
Where I get the most out of AI is at the very front of a project. Brainstorming. Pressure-testing an idea. Mapping the space before I commit to a direction. Drafting a plan and then arguing with myself about it through the model.
But the quality of that conversation is entirely dependent on what I bring to it. If I can frame the problem, ask sharp questions, and name what I’m actually trying to accomplish, the output is excellent. If I can’t, I get something that sounds smart and means nothing.
Prompt engineering is a real skill, but it’s downstream of a more basic one: knowing what you want and being able to say it out loud.
You have to manage it like a person
The other thing I keep coming back to is that working with AI looks a lot more like managing people than I expected.
You set context. You give feedback. You catch mistakes early before they compound. You push back on things that sound right but aren’t. And you edit the work.
That last part is where most of the slop gets in. The model produces something confident, polished, and subtly wrong. If you don’t have the domain expertise to catch it, you ship it. Then you wonder why your output feels generic.
Editing AI work well requires the same thing that editing a junior employee’s work requires: you have to know what good actually looks like.
Why this favors experts
The people I see getting the most out of AI are curious, creative, and unafraid to ask bad questions until they arrive at good ones. They have taste — they can feel when something is almost right but not quite, and they can articulate what’s wrong. And they’re good communicators with themselves, which is the part nobody talks about. They know what they’re trying to do before they open the chat window.
If you’re an expert in your field and you’re willing to manage the thing like a new hire, AI will meaningfully change your output. If you’re a beginner hoping the model will cover for what you don’t know, you’ll get something that looks like work and isn’t.
A personal example
I built a SaaS platform end-to-end using Claude Code. Not as a novelty — a real, production system with AI document processing and dozens of backend functions running through it. I’m not a career software engineer. But I know what the product needed to do, I’ve reviewed enough code over the years to know what good looks like, and I can tell when the model is about to do something stupid.
Without the domain understanding and the editorial eye, I would have shipped a mess. With them, I shipped something I’m proud of — faster than I could have any other way.
That’s the trade. AI doesn’t replace the expert. It lets the expert move at a speed that wasn’t previously possible. And it punishes people who try to use it as a substitute for learning the craft.
TL;DR — The Bottom Line
If you’ve been following AI at all, you’ve heard the line: “AI is only as good as the data it’s trained on.” That’s true, but it’s only half the picture. The data is one piece of context. The other piece — and the one nobody’s talking about enough — is you. Your proficiency. Your expertise. How clearly you’re thinking about what you’re trying to accomplish, how well you’re managing the tool on the front end, and how sharp your eye is when you’re editing the output on the back end.
The data determines what the model can do. The person determines what it actually does. That’s the whole point of this article — and honestly, that’s where the game is right now.