Ximedes Blog

The Human Firewall: A Senior Engineer’s Defense Against AI "Spaghetti Code"

Written by Antonis Kazoulis | 6/02/2026

The conversations about AI often happen in the boardroom, focusing on efficiency and bottom lines. But the real battle is happening in the terminal, where senior engineers are learning that their new assistant is fast, confident, and occasionally dangerous.

In our previous conversation with Ximedes CEO Joris Portegies Zwart, we discussed the strategic shift toward AI: a move away from "panic or evangelism" toward pragmatic adoption. But strategy is nothing without execution. If the CEO dictates the destination, the engineers are the ones driving the car. And as it turns out, when you put an AI engine in that car, it drives fast. Very fast.

To understand the tactical reality of this shift, we sat down with Jan, a Ximedes Senior Software Engineer who isn't interested in the hype cycle. He is interested in shipping clean, functional code. His experience offers a sobering counter-narrative to the idea that AI makes coding "easy." It doesn't. It makes coding faster, but it makes the engineering work of structuring, verifying, and planning architecture harder and more critical than ever.

Here is our conversation on the new discipline of AI-assisted engineering.

The Firehose Effect

Question: Joris mentioned the danger of the "firehose" effect, where AI generates code faster than we can think. From a technical perspective, how has your personal definition of "Done" changed?

Answer: The definition of "Done" hasn't changed, but the path to getting there has to be much more rigid to survive that firehose. The speed is real. If you aren't careful, you get buried in code you don't fully understand.

My strategy is to separate planning from execution. I never just say "write this code." For most tasks, I ask the AI agent to generate a plan first. I want to see how it intends to implement a feature, often asking for a logic breakdown before a single line of syntax is written. We discuss that plan. I challenge it. Only when I am happy with the blueprint do I let it generate the code.

Question: One of the biggest technical hurdles is the "context window": AI doesn't know the whole repo. When you are refactoring a complex legacy module, what is your specific technical workflow?

Answer: Context is everything. For new features, I write small specification documents. I ask the agent to read them and explicitly ask if anything is unclear before we proceed.

For larger refactorings, I treat the AI like a new team member who needs to be onboarded. I ask it to write a plan and save it as a markdown file. This file becomes the project's external memory. The next morning, I don't have to re-explain the entire architecture. I just say: "Read the plan.md file and start phase 2." It turns the session into a persistent workflow rather than a series of disconnected chats.
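What such an external-memory file looks like will vary by project; here is a hypothetical sketch of a plan.md for a refactoring, broken into the kind of phases Jan describes. The module and class names are invented for illustration.

```markdown
# Refactoring plan: payment-export module (illustrative example)

## Phase 1: Map the current state (done)
- List the public entry points of the exporter and their callers
- Document the current XML output format with sample messages

## Phase 2: Extract the serialization layer (in progress)
- Move XML building into a dedicated message-writer class
- Keep behavior identical; verify against golden-file tests

## Phase 3: Harden input handling
- Fail fast on missing values instead of writing placeholders
```

Because the agent wrote the plan itself, "Read the plan.md file and start phase 2" is enough to restore the full working context the next day.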

The Architecture vs. Implementation Gap

Question: There is a fear that if we rely on AI too much, we stop understanding the deeper architecture. Have you caught yourself accepting a suboptimal pattern just because the AI suggested it and it "worked"?

Answer: Definitely. In software, there are ten ways to solve a problem, but only one or two that fit your specific architecture. If you don't monitor the output, you end up with "sort-of-working" code. It functions, but it’s architecturally messy.

You have to constantly verify that the code aligns with the design. I’ve found that using agent files, like a CLAUDE.md in the repository, helps significantly. These files act as guardrails, instructing the AI on our specific coding standards and patterns so it doesn't drift.
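As a rough illustration of what such a guardrail file can contain, here is a hypothetical CLAUDE.md sketch. The specific rules below are invented examples, not Ximedes standards; the point is that the constraints are written down once, in the repository, instead of repeated in every prompt.

```markdown
# CLAUDE.md (illustrative sketch; adapt to your own repository)

## Coding standards
- Constructor injection only; never field injection
- Monetary amounts use a decimal type, never floating point

## Architecture guardrails
- Controllers call services; services call repositories. No shortcuts.
- New message types must validate against the schema before commit.

## When unsure
- Stop and ask instead of inventing values or APIs.
```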

Interestingly, I sometimes ask the agent to review its own code at the end of the day. It often spots mistakes it made hours earlier. It gives a fresh perspective, which is strange to say about a machine. Why didn't it just do it right the first time? But that recursive review process is valuable.

The Confident Hallucination

Question: AI code often compiles perfectly but fails on logic. Can you give us a specific example of a "confident hallucination" you’ve encountered recently?

Answer: We recently needed to generate XML messages for a financial project. The schema had strict non-nullable numeric fields. The AI didn't have the specific values in its context, so instead of stopping to ask, "Hey, I don't have a value for this field," it just inserted a -1.

It compiled, it looked valid, but in a financial system, a -1 can wreak havoc. That is the danger: it lacked the awareness to say "I don't know." It prioritized completing the task over logical correctness. That is where the human engineer is irreplaceable: spotting the valid syntax that carries invalid logic.
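One cheap layer of that human firewall is a mechanical check on generated output before it goes anywhere near a financial system. Below is a minimal Python sketch that scans an XML message for numeric fields carrying suspicious placeholder values like the -1 in Jan's story. The field names and sentinel list are invented assumptions for illustration, not the actual schema.

```python
import xml.etree.ElementTree as ET

# Numeric fields our (hypothetical) schema marks as non-nullable, and the
# sentinel-like values an AI assistant tends to fill in when it lacks data.
NUMERIC_FIELDS = {"Amount", "Fee", "ExchangeRate"}
SUSPECT_SENTINELS = {"-1", "0.0", "999999", ""}

def find_suspect_values(xml_text: str) -> list[str]:
    """Return 'field=value' entries for numeric fields holding sentinel-like values."""
    root = ET.fromstring(xml_text)
    problems = []
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]  # strip any XML namespace prefix
        if tag in NUMERIC_FIELDS:
            value = (elem.text or "").strip()
            if value in SUSPECT_SENTINELS:
                problems.append(f"{tag}={value!r}")
    return problems

message = "<Payment><Amount>-1</Amount><Fee>0.25</Fee></Payment>"
print(find_suspect_values(message))  # → ["Amount='-1'"]
```

A check like this doesn't prove the logic is right, but it turns the AI's silent guess into a loud failure, which is exactly the "I don't know" signal the model failed to give.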

The Junior Trap

Question: Joris argued that AI replaces the keyboard, not the engineer. But that intuition comes from years of manual coding. Are you worried that Juniors are failing to develop the "muscle memory" needed to debug complex issues?

Answer: There is recent research from Anthropic suggesting this risk is real: that juniors might not learn the deep patterns if they bypass the struggle of writing syntax. Personally, I am not yet worried. AI brings development to a different level. Juniors will simply learn to learn differently.

I recently saw that Claude Code has an "output-style" feature with a "Learning" mode. Instead of just dumping the answer, it explains the choice or asks the user to implement specific parts. If we use tools like that, we can turn the AI from a crutch into a tutor.

The New Stack

Question: Let’s get specific. Are you using Claude via the chat interface, or is it integrated into the IDE?

Answer: I work directly from the terminal. The terminal is now the most important part of my IDE because that is where the agent lives. I’d say 90% of my time is spent there.

There is an IntelliJ plugin that allows you to highlight code and send it to the terminal context, which saves typing class names, but the workflow has definitely shifted away from the graphical interface and back to the command line.

Question: If writing boilerplate is solved, what is the new most valuable technical skill for a developer at Ximedes?

Answer: It is tempting to say "prompt engineering," but I think the fundamentals remain. You still need to understand the domain. You still need to communicate with the team.

However, the role is shifting toward guidance. We are moving from writing code to guiding a project. The tools are strong, but they are like junior developers with infinite speed and zero long-term memory. The senior engineer’s job is to provide the direction, the context, and the constraints that keep that speed from becoming a disaster.