From Vibe Coding to Agentic Engineering
Why AI coding feels chaotic, why vibe coding breaks down, and what a more structured approach looks like.
Series
Engineering in the Age of AI
Part 1 of 3
- Part 1: From Vibe Coding to Agentic Engineering
- Part 2: How to Actually Build with AI Agents Without Creating Chaos
- Part 3: Maintaining AI-Built Systems Without Losing the Plot
AI might feel like a recent revolution in software engineering, but my journey with AI-augmented coding started over five years ago with the initial release of GitHub Copilot.
The first time I used it, I started typing a line of code and it felt like it read my mind, suggesting the rest of the function for me. This was not IntelliSense or autocomplete. This was whole blocks of code being generated exactly as I had imagined them. That might seem trivial now, but back in 2021, it felt like a superpower.
Fast forward to today and AI-enhanced coding has progressed rapidly. It has moved from code completion to agent-driven implementation that can write full features, tests, and documentation.
That raises the obvious question. If AI can now produce so much of the implementation, what exactly becomes more valuable about software engineering?
Are we moving toward vibe coding, prompt engineering, or something else entirely?
In this three-part series, I’ll explore what’s actually changing in software development, why AI coding feels chaotic in practice, and what seems to work.
Vibe coding
Vibe coding is a very recent term, and on the face of it, it sounds a bit ridiculous. Software engineering is supposed to be logical and deliberate, not something you do from a vibe.
But vibe coding is now part of the software development landscape, and it is not going away. It is empowering people who previously would not have considered building software to now create apps they had only imagined. Being able to code is no longer a barrier to creativity.
So what is vibe coding?
Vibe coding is the act of rapidly prompting an AI tool in natural language to build and iterate on your idea. It is fast-paced. The main goal is to build something that matches your vision. Code quality is secondary. If it works, that is good enough.
The workflow for vibe coding usually looks like this:
- Prompt - tell the AI to build something, add a feature, fix a bug, or tweak behaviour.
- Generate code - the AI writes the code, making its own decisions on architecture, tools, and libraries.
- Test and debug - run it and check it does what you intended. Tweak if needed.
- Repeat - iterate until you have built what you envisioned.
From a non-technical perspective, it is easy to see the appeal. It feels like your first “Hello, world!” moment all over again. Suddenly, you can build the thing you have been thinking about.
But is it maintainable? Performant? Accessible?
Why vibe coding breaks down
The limitations only really show up once you try to scale this approach.
Without strong guardrails, AI tends to drift architecturally. Each new prompt gives it a partial reset, so it solves the problem in whatever way looks best in that moment. That can be fine in isolation, but across a growing codebase it quickly leads to inconsistency and technical debt.
For a while, this was how engineers were using AI too. You would type a long prompt, paste in some code, hit enter, and expect it to come back with a feature or a fix. That works for smaller tasks. Across a larger system, it quickly becomes frustrating.
The reason is simple. The AI does not have full awareness of your system. It does not really understand your architecture, dependencies, side effects, or the broader goals of the project. It is trying its best with a limited view of the world.
Unlike a human engineer, it will not remember the system. Each interaction is bounded by a context window, so as the conversation grows, things get lost. You have probably seen this yourself. It remembers the first thing you told it and the last thing you said, but the middle is where things start to drift.
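A simplified sketch of why the middle gets lost: the context is bounded, and one common way of fitting a long conversation into that bound is to keep the pinned instructions and the most recent turns while dropping older middle turns. This is purely illustrative (a character budget standing in for tokens), not any specific tool's behaviour:

```typescript
// Illustrative sketch: a bounded "context window" modelled as a character
// budget. The pinned first message and the most recent turns survive; the
// middle of the conversation is what gets dropped.
function fitContext(messages: string[], budget: number): string[] {
  const pinned = messages[0]; // e.g. system / project instructions
  const recent: string[] = [];
  let used = pinned.length;
  // Walk backwards from the newest message, keeping what still fits.
  for (let i = messages.length - 1; i >= 1; i--) {
    if (used + messages[i].length > budget) break;
    recent.unshift(messages[i]);
    used += messages[i].length;
  }
  return [pinned, ...recent];
}

// The first thing you told it and the last thing you said survive;
// the middle does not.
console.log(fitContext(["sys", "old", "mid", "new"], 9)); // [ 'sys', 'mid', 'new' ]
```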
A common example is that one prompt gives you a clean service layer, the next adds business logic into a controller, and the one after that introduces a different validation pattern again. Each step can look reasonable on its own. Together, they pull the system apart.
Without something stable to refer back to, the AI starts optimising for what is immediately in front of it rather than the system as a whole.
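As a concrete sketch of that drift, here is a hypothetical TypeScript example (all names invented): one generated slice routes validation through a shared helper, a later slice re-implements the same rule inline, and the two quietly disagree on edge cases.

```typescript
// One prompt produced a shared validator called from a thin handler...
function isValidEmail(email: string): boolean {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);
}

function createUser(email: string): { ok: boolean; error?: string } {
  if (!isValidEmail(email)) return { ok: false, error: "invalid email" };
  return { ok: true };
}

// ...a later prompt re-implemented the rule inline, slightly differently.
// Each version looks reasonable on its own; together there are now two
// sources of truth for "what is a valid email".
function updateUser(email: string): { ok: boolean; error?: string } {
  if (!email.includes("@")) return { ok: false, error: "invalid email" };
  return { ok: true };
}

// The two slices now disagree on the same input:
console.log(createUser("a@b").ok); // false — the regex wants a dot after the @
console.log(updateUser("a@b").ok); // true  — the inline check only wants an @
```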
We need a way to anchor the AI’s context.
Introducing agentic engineering
The shift starts when you stop treating AI as a one-shot code generator and start treating it as a contributor inside a structured engineering workflow.
That is the core of what I’ll call agentic engineering. Instead of asking for full features and iterating without any guardrails, you first put some structure in place. You design a system for the agent to work within.
You are no longer just producing the output by hand. You are designing the production system around it.
You are not removing engineering; you are making it explicit again.
At a high level, that looks like:
- defining what you actually want to build
- breaking it down into smaller tasks
- giving the agent enough context to do the job
- checking that the output actually matches the system you intended
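The list above can be sketched as explicit data the agent works within, rather than a single open-ended prompt. This is a hypothetical shape, not any real tool's format: each slice is small, carries its own context, and states how the output will be checked.

```typescript
// Hypothetical sketch of a structured brief for an agent. All names,
// paths, and fields are illustrative.
interface Slice {
  id: string;
  goal: string;         // what to build, in one sentence
  context: string[];    // files / conventions the agent must read first
  acceptance: string[]; // how the output is checked against the intent
}

const spec: Slice[] = [
  {
    id: "auth-01",
    goal: "Add a login endpoint following the existing service-layer pattern",
    context: ["docs/architecture.md", "src/services/userService.ts"],
    acceptance: ["validation uses the shared validator", "unit tests pass"],
  },
];

// Each slice is small, carries its own context, and has an explicit check —
// the guardrails from the list above made concrete.
for (const s of spec) {
  console.log(`${s.id}: ${s.goal} (${s.acceptance.length} checks)`);
}
```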
The details of that system matter a lot, and I’ll get into them properly in the next article. But even at a high level, it is already clear where these tools are genuinely useful today.
Where AI works well today
This section will likely date quickly, but right now AI agents are already capable across most core engineering tasks.
They are especially good at:
- Scaffolding - getting projects, features, and basic wiring in place quickly
- Feature work - as long as the task is clear and small enough, they can build surprisingly useful slices of functionality
- Tests - especially the kind of coverage we all know we should write more often
- Documentation - consistent, structured, and finally less likely to be skipped
The one area I still find a bit hit-and-miss is frontend work, especially when trying to match designs from tools like Figma. It gets close, but not always exactly right. You can get a working component very quickly, but spacing, hierarchy, and the little visual details often still need a human pass. That gap is definitely closing though. I am also starting to see a shift in the design-to-development process that pushes more of the HTML and CSS thinking further left, but that is probably one for another post.
What this means
The problem is not the AI. It is how we are using it.
If you treat it like a faster autocomplete tool, you just get faster chaos.
If you treat it as part of a structured engineering workflow, you get something much more useful. AI does not replace engineering systems. It amplifies them.
If your process is chaotic, AI will make it worse. If your process is structured, AI becomes a force multiplier.
The shift is not from developer to prompt engineer. It is from writing code to designing systems that produce it.
In the next article, I’ll break down the workflow we have been using in practice: how we structure the spec, break work into slices, and build the guardrails that stop the system drifting as it grows.