LLM Engineering Mindset

Coding Is a Commodity. Now What?

How the rise of AI-assisted development is redefining what it means to be a good developer.

Three months ago, I ran an experiment on myself. I tracked every project, every commit, and every line of code I shipped. The results:

  • 5 projects
  • 353 commits
  • 496,558 lines added

Roughly 90% of that code was generated by AI.

When I say coding is a commodity, I mean it in the economic sense: a product that's abundant, interchangeable, and no longer a source of competitive advantage. Just like cloud compute went from rare expertise to a utility you rent by the hour, writing code is heading the same way. The ability to produce working software is no longer scarce. The tools to generate it are available to everyone, and the output is increasingly indistinguishable.

So if everyone can code now, what separates the good developers from the rest?

After shipping KubePath (an interactive Kubernetes learning platform), Blog Agent (an AI-powered writing system with a 15-dimension quality pipeline), and SecretToEnv (a Kubernetes secrets converter), I have some answers. Most of them came from things that went wrong.

You're Not a Coder Anymore. You're a Manager.

When building Blog Agent, I asked the LLM to create the writing pipeline. The system needed to research a topic, analyze the top 10 existing articles, find content gaps, and write something differentiated.

The first version worked, technically. But it treated every step as independent. The research node fetched sources and the writing node wrote content, but nothing connected them. The writer didn't reference the research. The critic didn't check against the gaps the researcher found. A pipeline in name only.

The problem? I'd described the what but not the why. I hadn't told the LLM that the entire point was differentiation, that every node needed to carry forward the unique angle from research. Once I started ending prompts with "Analyse the requirement and the current codebase, then ask me questions before you start", the LLM began asking things like: "Should the writing node receive the full gap analysis or just a summary?" Those questions saved me days.

Most AI-assisted projects go sideways not because the LLM can't code, but because it assumes something the developer never confirmed. Good developers manage that loop: give context, invite questions, review output, iterate.

Taste Is the New Moat

KubePath taught me this. The AI generated all 38 chapters of Kubernetes content, the quiz system, the command validation. Functionally complete. But the first version felt like a textbook pasted into a terminal.

What made it engaging wasn't the code. It was decisions no LLM suggested: the 12-level progression system (from "Pod Seedling" to "Kubernetes Kurator"), streak tracking with milestone celebrations at 3, 7, and 30 days, social sharing for achievements. The gamification that turns a dry topic into something you want to return to tomorrow. The AI built the quiz engine, but never once said, "This would be more engaging if users had a sense of progression."

The market is about to be flooded with AI-generated applications. Most will be mediocre. Taste isn't a nice-to-have. It's the moat.

We're All Testers Now

The mistake that cost me the most time: not writing tests first.

Early in Blog Agent, I let the AI generate the section critic node without tests. It worked fine in isolation. But when I added the refinement loop (sections scoring below 8/10 get automatically rewritten), the scoring logic silently shifted. Sections that should have been flagged were passing through. The quality I promised in the README wasn't being enforced.
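The refinement loop that silently broke is worth making concrete. Here is a hypothetical sketch of a score-gated loop like the one described; the function names, threshold constant, and retry cap are illustrative, not Blog Agent's actual code:

```python
# Hypothetical sketch of a score-gated refinement loop.
# PASS_THRESHOLD and MAX_PASSES are illustrative values.
PASS_THRESHOLD = 8   # sections scoring below 8/10 get rewritten
MAX_PASSES = 3       # cap retries so a stubborn section can't loop forever

def refine(section, critique, rewrite):
    """Re-score and rewrite a section until it passes or retries run out."""
    score = critique(section)
    for _ in range(MAX_PASSES):
        if score >= PASS_THRESHOLD:
            break
        section = rewrite(section, score)
        score = critique(section)
    return section, score
```

The bug lived exactly in the comparison this sketch makes explicit: when the scoring logic shifted, sections that should have failed the `score >= PASS_THRESHOLD` check sailed through, and nothing noticed.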

That's when I restructured around testing. Unit tests fully mocked, running in under 5 seconds, no API keys needed. Integration tests with mocked LLM responses but real web tools validating the pipeline end-to-end. The tests became the contract: they told the LLM why each node existed and what behavior was expected. When I later added the API key rotation manager (rotating across 5 Google API keys with quota tracking), existing tests caught every regression before it reached main.
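A fully mocked unit test in this style might look like the sketch below. `route_section` and the critic interface are stand-ins I made up for illustration, not Blog Agent's real API; the point is that the routing contract lives in plain code while the LLM verdict is mocked, so the test runs offline in milliseconds:

```python
# Sketch of the "tests as contract" idea with a mocked LLM critic.
# route_section and critic.score are illustrative names, not a real API.
from unittest.mock import MagicMock

def route_section(critic, section):
    """Ask the critic for a score and decide where the section goes next."""
    score = critic.score(section)
    return "rewrite" if score < 8 else "assemble"

def test_low_score_routes_to_rewrite():
    critic = MagicMock()
    critic.score.return_value = 6  # mocked verdict: below threshold
    assert route_section(critic, "draft text") == "rewrite"

def test_passing_score_routes_to_assembly():
    critic = MagicMock()
    critic.score.return_value = 9  # mocked verdict: passes
    assert route_section(critic, "draft text") == "assemble"
```

No API key, no network, and the threshold is pinned down in an assertion instead of a README promise.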

If the LLM is the engine, your tests are the guardrails. Without them, you're driving fast with no steering.

Architecture Is Still a Human Job

Give an LLM a function signature and a clear contract, and it'll nail it. Ask it to decide how an entire system should be structured, and it falls apart.

Blog Agent has a seven-phase pipeline: topic discovery, content landscape analysis, planning, research, writing, assembly, review. Each phase has state management through Pydantic models and routing logic for conditional refinement passes. When I asked the AI to scaffold this, it gave me a flat sequential script. No state machine, no graph-based orchestration, no conditional routing.
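The difference between a flat script and graph orchestration is easiest to see in miniature. This is a plain-Python sketch of the shape, not LangGraph and not Blog Agent's real design; the node names, router, and state dict are illustrative:

```python
# Plain-Python sketch of graph-style orchestration with conditional routing.
# Node names and the router are illustrative, not a real pipeline's code.
def run_graph(state, nodes, router, start):
    """Execute one node at a time; the router inspects state to pick the
    next node, which is what makes refinement loops possible."""
    current = start
    while current != "done":
        state = nodes[current](state)
        current = router(current, state)
    return state

def router(node, state):
    """Conditional routing: a low review score loops back through refine."""
    if node == "write":
        return "review"
    if node == "review":
        return "done" if state["score"] >= 8 else "refine"
    return "review"  # after a refine pass, review again
```

A flat script can only fall through top to bottom. The router is the piece the AI's scaffold was missing: it lets a failing review send the draft backward instead of forward.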

The decision to use LangGraph, to model each phase as a node with typed state transitions, to separate tools from nodes from graph logic, that was mine. The AI couldn't have known I'd need resumable jobs (so interrupted generations could pick up mid-pipeline) or quota-aware key rotation resetting at midnight Pacific. Those decisions come from understanding real constraints: free-tier API limits, long-running pipelines, the need to debug individual nodes without rerunning the whole graph.
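To make the key-rotation decision concrete, here is a hedged sketch of what a quota-aware rotator might look like. The class, quota number, and method names are mine for illustration, not the real manager:

```python
# Hedged sketch of quota-aware key rotation; the class, default quota,
# and method names are illustrative, not the real manager's code.
from datetime import datetime, time, timedelta, timezone
from zoneinfo import ZoneInfo

class KeyRotator:
    """Rotate across API keys, skipping any that have hit their daily
    quota. Counters reset at midnight in the configured timezone
    (Pacific by default, matching the free-tier reset)."""

    def __init__(self, keys, daily_quota=50, tz=None):
        self.tz = tz or ZoneInfo("America/Los_Angeles")
        self.keys = list(keys)
        self.daily_quota = daily_quota
        self.usage = {k: 0 for k in self.keys}
        self.reset_at = self._next_midnight()

    def _next_midnight(self):
        tomorrow = (datetime.now(self.tz) + timedelta(days=1)).date()
        return datetime.combine(tomorrow, time.min, tzinfo=self.tz)

    def acquire(self):
        """Return a key with quota remaining, resetting counters if the
        clock has passed midnight in the configured timezone."""
        if datetime.now(self.tz) >= self.reset_at:
            self.usage = {k: 0 for k in self.keys}
            self.reset_at = self._next_midnight()
        for key in self.keys:
            if self.usage[key] < self.daily_quota:
                self.usage[key] += 1
                return key
        raise RuntimeError("all keys exhausted until the next reset")
```

None of this is hard to write. The hard part was knowing it needed to exist: that insight came from hitting free-tier limits on long-running pipelines, not from the LLM.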

SecretToEnv is the counterpoint. Read Kubernetes secrets, output a .env file. Click, the Kubernetes Python client, three files. Sometimes the best architectural decision is knowing when to stop.
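The core of a tool like that fits in one function. Kubernetes stores Secret values base64-encoded, so the conversion is mostly decoding; this sketch covers only the transform, with the actual fetch (via the Kubernetes Python client's `CoreV1Api.read_namespaced_secret`) left out, and the function name is mine:

```python
# Minimal sketch of the Secret-to-.env transform. Kubernetes Secret .data
# values are base64-encoded strings; fetching the Secret is omitted here.
import base64

def secret_to_env(data: dict) -> str:
    """Turn a Secret's .data mapping (base64 values) into .env content."""
    lines = []
    for key, encoded in sorted(data.items()):
        value = base64.b64decode(encoded).decode("utf-8")
        lines.append(f"{key}={value}")
    return "\n".join(lines) + "\n"
```

That's the whole trick. Anything more elaborate than this would have been the over-engineering the tool deliberately avoids.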

Ideas Are the Ultimate Currency

All the skills above make you effective. They don't make you original.

KubePath didn't come from an AI suggestion. It came from frustration with how people learn Kubernetes: watching videos, reading docs, never actually breaking things. The idea of learning Kubernetes from your terminal, with interactive lessons that escalate from theory to debugging broken deployments, was a human insight. Blog Agent came from noticing that AI-generated posts are generic rehashes. No LLM proposed the insight that an agent should read the top 10 articles and find the gaps before writing a single word.

There are no shortcuts here. You build this instinct by reading codebases, not just documentation. Study why Redis chose a single-threaded model. Read the design documents behind Kubernetes. Understand why SQLite decided a single-file database was enough. The more architectures you absorb, the sharper your instincts get.

AI can execute your ideas at unprecedented speed. But it can't have them for you.

The Uncomfortable Truth

Here's the contrarian take I keep circling back to: the developers who resist AI-assisted coding aren't falling behind. They're becoming extinct.

I don't mean that dramatically. I mean it practically. In three months, I shipped what would have taken me over a year working alone. The developers competing with me for the same opportunities aren't choosing between "AI-assisted" and "hand-written." They're choosing between "shipped" and "still building." The gap isn't closing. It's accelerating.

But here's the tension: the faster AI makes us, the more it rewards the skills that have nothing to do with code. Judgment. Taste. Architectural thinking. The ability to see around corners. We're entering an era where the best developers will write the least code and make the most decisions.

I started this experiment expecting to write about productivity. And yes, KubePath went from idea to 38 chapters with 600+ tests in days. Blog Agent's seven-phase pipeline shipped in under a month. That velocity is real.

But the deeper lesson was about everything the AI couldn't do. It couldn't connect Blog Agent's pipeline without me explaining the why. It couldn't make KubePath engaging without my understanding of what motivates learners. It couldn't write tests for behavior that didn't exist yet. It couldn't stop itself from over-engineering SecretToEnv.

The tools have changed. The bar for entry has disappeared. And the developers who thrive won't be the ones who code the best. They'll be the ones who think the best.

The only question left is whether you'll be directing the orchestra or watching from the audience.

Comments (1)

Jerry 3 days ago
Your point about coding becoming a commodity reminds me of how shipbuilding used to work.

In the past, explorers first had to design the vessel, source materials, and physically build the boat before they could even think about crossing the ocean. The act of construction itself was the bottleneck.

Today, it’s as if ships can be conjured instantly. The bottleneck is no longer building the boat — it’s knowing where to sail, why you’re sailing, and how to survive the journey. A bad plan with a perfect ship still sinks.

AI feels similar. We can generate the “boat” (code) in seconds, but direction, navigation, trade-offs, and purpose remain deeply human responsibilities. The value shifts from craftsmanship of construction to clarity of intent.

In that sense, developers aren’t losing relevance — they’re becoming navigators instead of shipwrights.